Lecture 5 - Memory Management
Memory Management
1. Basic memory management
2. Partitioning
3. Swapping
4. Paging
Basic Memory Management
• Ideally programmers want memory that is
– large
– fast
– non-volatile
• Memory hierarchy
– small amount of fast, expensive memory – cache
– medium-speed, medium price volatile main memory (RAM)
– gigabytes of slow, cheap, nonvolatile disk storage
Basic Memory Management
Two broad classes of memory management systems:
– Those that transfer processes to and from disk during execution
• Called swapping (and paging)
– Those that keep every process in main memory throughout its execution
Basic Memory Management
The simplest memory management scheme is to run one program at a time,
sharing the memory between that program and the OS.
• Okay if
– there is only one thing to do
– the memory available approximately matches the memory required
• Otherwise,
– poor CPU utilisation in the presence of I/O waiting
– poor memory utilisation with a variable job mix
• Idea
– Subdivide memory and run more than one process at once!
– Multiprogramming, Multitasking
Fixed Partitioning
Problem: how should memory be divided?
Approach 1: divide memory into fixed, equal-sized partitions
• Any process whose size is <= the partition size can be loaded into any free partition.
• Any unused space in a partition is wasted; this is called internal fragmentation (a small worked example follows).
• Processes smaller than main memory, but larger than a single partition, cannot run.
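As a rough illustration (the 8M partition size and the process sizes below are made-up values, not from the slides), the wasted space is simply the difference between the partition size and the process size:

```c
#include <stdio.h>

#define PARTITION_MB 8   /* hypothetical fixed partition size of 8M */

int main(void) {
    int process_mb[] = {3, 7, 8, 11};   /* hypothetical process sizes in MB */
    int n = sizeof(process_mb) / sizeof(process_mb[0]);

    for (int i = 0; i < n; i++) {
        if (process_mb[i] > PARTITION_MB) {
            /* bigger than any partition: cannot run under this scheme */
            printf("a %dM process does not fit in an %dM partition\n",
                   process_mb[i], PARTITION_MB);
        } else {
            /* leftover space inside the partition is internal fragmentation */
            printf("a %dM process wastes %dM of its %dM partition\n",
                   process_mb[i], PARTITION_MB - process_mb[i], PARTITION_MB);
        }
    }
    return 0;
}
```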
Fixed Partitioning
Approach 2: fixed, variable-sized partitions with multiple queues
– Place each process in the queue for the smallest partition that it fits in.
Problem:
– Some partitions may sit idle: small jobs are available and waiting, but only a
large partition is free.
Fixed Partitioning
Approach 2: fixed, variable-sized partitions with a single queue
• Search the queue for any job that fits; place small jobs in a large partition
if necessary
– This increases internal fragmentation
Fixed Partition Summary
Simple
• Easy to implement
• Can result in poor memory utilization
– Due to internal fragmentation
Dynamic Partitioning
Partitions are of variable length
• Process is allocated exactly what it needs
– Assume a process knows what it needs
• Effect of dynamic partitioning: illustrated by the example on the next slide
Dynamic Partitioning
Example
[Figure: 64M of memory; the OS occupies 8M and the remaining 56M is allocated
dynamically to processes P1 (20M), P2 (14M), P3 (18M) and P4 (8M) as they arrive
and depart, ending with free holes of 6M, 6M and 4M scattered between them.]
• External fragmentation
– Memory external to all processes becomes fragmented into small holes.
• Can be resolved using compaction
– The OS moves processes so that they are contiguous.
– This is time consuming and wastes CPU time.
• In the diagram above
– There are two 6M holes and one 4M hole free (16M in total), but no further
process requiring more than 6M can run because the free space is fragmented
(external fragmentation).
– We end up with unusable holes.
Dynamic Partitioning
• When memory is assigned dynamically, the
OS must manage it.
• There are two ways to keep track of
memory usage:
– with Bit Maps
– with lists
Memory Management with Bit Maps
• Memory is divided into fixed-size allocation units; each unit corresponds to one
bit in the bitmap (e.g. 0 = free, 1 = occupied).
• Allocating a process means searching the bitmap for a run of enough consecutive
free units, which can be slow (a small sketch follows).
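A minimal sketch of bitmap-based tracking, assuming memory is modelled as a fixed number of equal allocation units; one byte per unit is used here instead of one bit, purely for brevity:

```c
#include <stdio.h>
#include <string.h>

#define UNITS 64                        /* memory modelled as 64 allocation units */
static unsigned char bitmap[UNITS];     /* 0 = free, 1 = occupied */

/* Find 'count' consecutive free units, mark them occupied and return the
 * index of the first one, or -1 if no run of free units is long enough. */
int bitmap_alloc(int count) {
    int run = 0;
    for (int i = 0; i < UNITS; i++) {
        run = bitmap[i] ? 0 : run + 1;
        if (run == count) {
            int start = i - count + 1;
            memset(&bitmap[start], 1, count);
            return start;
        }
    }
    return -1;
}

/* Mark a previously allocated run of units as free again. */
void bitmap_free(int start, int count) {
    memset(&bitmap[start], 0, count);
}

int main(void) {
    int a = bitmap_alloc(10);
    int b = bitmap_alloc(5);
    printf("a at unit %d, b at unit %d\n", a, b);
    bitmap_free(a, 10);
    printf("after freeing a, an 8-unit request starts at unit %d\n", bitmap_alloc(8));
    return 0;
}
```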
Dynamic Partition Allocation Algorithms
Basic Requirements
– Quickly locate a free partition satisfying the request
– Minimise external fragmentation
– Efficiently support merging two adjacent free partitions into a
larger partition
Dynamic Partition Allocation Algorithms
• When processes and holes are kept in a list sorted by address,
several algorithms can be used to allocate memory for a
newly created or swapped-in process (a sketch of the first two follows this list).
1. First fit:
a. The memory manager scans along the list until it finds a
hole that is big enough.
b. The hole is broken into two pieces: one for the process and
one for the unused memory.
c. It is fast because it searches as little as possible.
2. Next fit:
a. Same as first fit, but it keeps track of where it is
whenever it finds a suitable hole.
b. The next time it is called to find a hole, it starts searching
the list from the place where it left off last time.
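A minimal sketch of first fit and next fit over a singly linked list of holes kept sorted by address; the struct hole layout and the caller-side splitting of the chosen hole are assumptions of this sketch, not part of the slides:

```c
#include <stddef.h>

/* Assumed representation for this sketch: free holes in a singly linked
 * list sorted by address; splitting the chosen hole is left to the caller. */
struct hole {
    size_t start;           /* start address of the hole */
    size_t size;            /* size of the hole */
    struct hole *next;      /* next hole, in address order */
};

/* First fit: take the first hole that is big enough. */
struct hole *first_fit(struct hole *head, size_t request) {
    for (struct hole *h = head; h != NULL; h = h->next)
        if (h->size >= request)
            return h;
    return NULL;            /* no hole large enough */
}

/* Next fit: like first fit, but resume the scan where the previous search
 * stopped instead of always starting from the head of the list. */
static struct hole *last_pos = NULL;

struct hole *next_fit(struct hole *head, size_t request) {
    if (head == NULL)
        return NULL;
    struct hole *start = last_pos ? last_pos : head;
    struct hole *h = start;
    do {
        if (h->size >= request) {
            last_pos = h;                   /* remember where we stopped */
            return h;
        }
        h = h->next ? h->next : head;       /* wrap around to the list head */
    } while (h != start);
    return NULL;
}
```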
Dynamic Partition Allocation Algorithms
3. Best fit:
a. It searches the entire list and takes the smallest hole that
is adequate, instead of breaking up a big hole that might
be needed later.
b. It is slower because it must search the entire list every
time it is called.
c. First fit and best fit tend to fill up memory with tiny,
useless holes.
4. Worst fit:
a. Tries to avoid generating tiny holes: it takes the largest
available hole, so the unused memory left over will be big
enough to be useful (a sketch of both follows).
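A corresponding sketch of best fit and worst fit, using the same assumed hole-list representation as the first-fit sketch above (repeated here so the snippet is self-contained):

```c
#include <stddef.h>

struct hole {
    size_t start;
    size_t size;
    struct hole *next;
};

/* Best fit: scan the whole list and remember the smallest hole that is
 * still large enough for the request. */
struct hole *best_fit(struct hole *head, size_t request) {
    struct hole *best = NULL;
    for (struct hole *h = head; h != NULL; h = h->next)
        if (h->size >= request && (best == NULL || h->size < best->size))
            best = h;
    return best;
}

/* Worst fit: take the largest adequate hole, so the piece left over after
 * the split stays big enough to be useful. */
struct hole *worst_fit(struct hole *head, size_t request) {
    struct hole *worst = NULL;
    for (struct hole *h = head; h != NULL; h = h->next)
        if (h->size >= request && (worst == NULL || h->size > worst->size))
            worst = h;
    return worst;
}
```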
Dynamic Partition Allocation Algorithms
5. Quick fit:
a. Maintain separate lists for some commonly requested sizes.
b. E.g. a table of n entries, in which the first entry points
to a list of 12K holes, the second entry points to a list
of 20K holes, and so on.
c. With quick fit, finding a hole of the required size is
extremely fast.
d. The disadvantage is that when a process terminates or is
swapped out, finding its neighbours to merge with is expensive.
e. If merging is not done, memory fragments into a large
number of small holes into which no process will fit
(a small sketch of the idea follows).
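A rough sketch of the quick-fit idea, assuming a small table of commonly requested sizes, each heading its own free list; the size classes below are arbitrary examples, and merging of adjacent holes is deliberately left out:

```c
#include <stddef.h>

struct hole {
    size_t start;
    size_t size;
    struct hole *next;
};

/* Hypothetical table of commonly requested sizes; each entry heads its own
 * free list, so a request can jump straight to a list of suitable holes. */
#define NCLASSES 4
static const size_t class_size[NCLASSES] = {4096, 12288, 20480, 65536};
static struct hole *class_list[NCLASSES];

/* Quick fit: pop a hole from the first size class that can hold the request. */
struct hole *quick_fit(size_t request) {
    for (int i = 0; i < NCLASSES; i++) {
        if (request <= class_size[i] && class_list[i] != NULL) {
            struct hole *h = class_list[i];
            class_list[i] = h->next;
            return h;
        }
    }
    return NULL;   /* a fuller implementation falls back to a general list */
}

/* Return a freed hole to the list for its size class.  A real quick-fit
 * allocator also has to merge adjacent holes, which is the expensive part. */
void quick_release(struct hole *h) {
    for (int i = 0; i < NCLASSES; i++) {
        if (h->size <= class_size[i]) {
            h->next = class_list[i];
            class_list[i] = h;
            return;
        }
    }
}
```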
Allocation
A fixed partitioning scheme limits the number of active
processes and may use space inefficiently if there is a poor
match between the available partition sizes and process sizes.
Buddy System
• The entire available space is treated as a single block of size 2^U
• If a request of size s satisfies 2^(U-1) < s <= 2^U
– the entire block is allocated
• Otherwise the block is split into two equal buddies
– the process continues until the smallest block greater than or equal
to s is generated (a small sketch of this rounding and splitting follows)
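A small sketch of the rounding and splitting described above, assuming a single 1M (2^20 byte) block of memory and a hypothetical 100K request:

```c
#include <stdio.h>

/* Round a request up to the smallest power of two that can hold it, i.e. the
 * block size the buddy system would actually hand out. */
unsigned long buddy_block_size(unsigned long s) {
    unsigned long size = 1;
    while (size < s)
        size <<= 1;
    return size;
}

int main(void) {
    unsigned long total = 1UL << 20;        /* whole memory: one 2^20 = 1M block */
    unsigned long request = 100 * 1024;     /* hypothetical 100K request */
    unsigned long block = buddy_block_size(request);

    /* Split the block in half repeatedly until the piece just fits the request. */
    for (unsigned long b = total; b > block; b >>= 1)
        printf("split a %luK block into two %luK buddies\n", b / 1024, b / 2048);

    printf("allocate a %luK block for the %luK request\n",
           block / 1024, request / 1024);
    return 0;
}
```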
Example of Buddy System
Timesharing
• With a timesharing system there is sometimes not
enough memory to hold all of the current
processes. Excess processes must be kept on
disk and brought in to run dynamically.
• There are two approaches:
1. Swapping: bringing in each process in its entirety,
running it for a while, then putting it back on the
disk.
2. Virtual Memory: allows programs to run even if
they are only partially resident in main memory.
Schematic View of Swapping
Swapping
[Figure: memory divided into frames holding pieces A.0–A.3, B.0–B.2, C.0–C.3 and
D.0–D.4 of processes A–D, with a page table recording which frame holds each piece.]
Replacement Policy
• Note (in the accompanying figure, not reproduced here) that the clock policy is
adept at protecting frames 2 and 5 from replacement.
Clock Policy
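Since the figure is not reproduced, the following is a minimal sketch of the clock (second-chance) policy: frames whose use bit is set, such as frames 2 and 5 in this made-up example, are skipped and given a second chance before being considered for eviction.

```c
#include <stdio.h>
#include <stdbool.h>

#define NFRAMES 8

/* Clock (second-chance) replacement: frames are treated as a circle with a
 * "hand".  A frame whose use bit is set is skipped (and the bit cleared);
 * the first frame found with a clear use bit is chosen as the victim. */
static bool use_bit[NFRAMES];
static int hand = 0;

int clock_select_victim(void) {
    for (;;) {
        if (!use_bit[hand]) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;    /* leave the hand past the victim */
            return victim;
        }
        use_bit[hand] = false;              /* second chance: clear and move on */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void) {
    /* Recently referenced frames (use bit set) survive one pass of the hand. */
    use_bit[2] = true;
    use_bit[5] = true;
    printf("victim frame: %d\n", clock_select_victim());   /* prints 0 */
    return 0;
}
```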