Lecture 5 - Memory Management

The document discusses various memory management techniques used in operating systems: 1. It describes basic memory management concepts like memory hierarchy and the role of the memory manager. 2. It covers static memory allocation techniques like fixed and dynamic partitioning, as well as the buddy system approach. 3. It also discusses dynamic memory allocation using paging and swapping to allow processes to run even if partially in memory, along with page replacement policies.


Memory Management

Memory Management
1. Basic memory management
2. Partitioning
3. Swapping
4. Paging

Basic Memory Management
• Ideally programmers want memory that is
– large
– fast
– non-volatile
• Memory hierarchy
– small amount of fast, expensive memory – cache
– medium-speed, medium-price, volatile main memory (RAM)
– gigabytes of slow, cheap, nonvolatile disk storage

• Memory manager, a part of the OS, handles the memory hierarchy
Basic Memory Management
• Memory manager:
– Keeps track of which parts of memory are in use and which are not.
– Allocates memory to processes when they need it and deallocates
it when they are done.
– Manages movement (swapping) between main memory and disk when
main memory is too small to hold all processes.
Basic Memory Management
Two broad classes of memory management systems:
– Those that transfer processes to and from disk during execution.
• Called swapping
– Those that don’t
• Simple
• Might find this scheme in an embedded device, phone, smartcard,
or PDA.
Basic Memory Management
The simplest memory management is to run one program at a time,
sharing the memory between that program and the OS.
Okay if:
– There is only one thing to do
– Memory available approximately equals memory required

• Otherwise,
– Poor CPU utilisation in the presence of I/O waiting
– Poor memory utilisation with a variable job mix

• Idea:
Subdivide memory and run more than one process at once!
– Multiprogramming, multitasking
Fixed Partitioning
Problem: How to divide memory?
Approach 1: Divide memory into fixed equal-sized partitions
Any process <= partition size can be loaded into any partition.
Any unused space in a partition is wasted; this is called internal
fragmentation. Processes smaller than main memory, but larger
than a partition, cannot run.
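As a rough sketch of the waste this causes, the following totals the internal fragmentation over equal-sized partitions (the 8M partition size and the process sizes are assumptions for illustration, not from the slides):

```python
PARTITION_SIZE = 8  # MB; an assumed equal partition size for illustration

def internal_fragmentation(process_sizes):
    """Sum the unused space inside partitions for processes that fit;
    a process larger than a partition cannot be loaded at all."""
    waste = 0
    for size in process_sizes:
        if size <= PARTITION_SIZE:
            waste += PARTITION_SIZE - size  # unused tail of the partition
    return waste

# Hypothetical process sizes in MB; the 12M process exceeds the partition size.
print(internal_fragmentation([3, 5, 8, 12]))  # → 8 (5M + 3M + 0M wasted)
```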
Fixed Partitioning
Approach 2: Fixed, variable-sized partitions with multiple queues:
– Place each process in the queue for the smallest partition that it fits in.
Problem:
– Some partitions may be idle: small jobs are available, but only a
large partition is free.
Fixed Partitioning
Approach 2: Fixed, variable-sized partitions with a single queue:
• Search for any job that fits; place small jobs in a large partition
if necessary.
– Increases internal memory fragmentation
Fixed Partition Summary
Simple
• Easy to implement
• Can result in poor memory utilization
– Due to internal fragmentation

• Used in the OS/360 operating system
– An old mainframe batch system
• Still applicable for simple embedded systems
Dynamic Partitioning
Partitions are of variable length
• Process is allocated exactly what it needs
– Assume a process knows what it needs
• Effect of dynamic partitioning

Dynamic Partitioning
Example (refer to Figure 7.4)
[Diagram: memory with an 8M OS partition and a 56M user area; processes
P1 (20M), P2 (14M), P3 (18M), and P4 (8M) are loaded and removed over
time, leaving scattered empty blocks of 6M, 6M, and 4M.]
• External fragmentation
– Memory external to all processes is fragmented
• Can be resolved using compaction
– The OS moves processes so that they are contiguous
– Time consuming and wastes CPU time


Dynamic Partitioning
• Effect of dynamic partitioning

In the previous diagram:
– There are two 6M holes and one 4M hole free, but they cannot be used to
run any more processes requiring more than 6M, as the space is fragmented
(external fragmentation). We end up with unusable holes.

• Reduce external fragmentation by compaction
– Shuffle memory contents to place all free memory together in one large block.
– Compaction is possible only if relocation is dynamic and is done at execution
time.
Compaction
External fragmentation can be reduced by compaction
– Only if we can relocate running programs
– Generally requires hardware support

Dynamic Partitioning
• When memory is assigned dynamically, the
OS must manage it.
• There are two ways to keep track of
memory usage:
– with bitmaps
– with lists
Memory Management with Bit Maps

(a) Part of memory with 5 processes, 3 holes


– tick marks show the memory allocation units
– shaded regions are free
(b) Corresponding bitmap
(c) Same information as a list
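With a bitmap, allocating a process that needs k allocation units means scanning for a run of k zero bits. A minimal sketch (the list-of-bits representation is an assumption for illustration):

```python
def find_free_run(bitmap, k):
    """Return the index of the first run of k free (0) allocation
    units in the bitmap, or -1 if no such run exists."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i       # a new run of free units begins here
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0             # an allocated unit breaks the run
    return -1

# 1 = allocated unit, 0 = free unit
bitmap = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1]
print(find_free_run(bitmap, 4))     # → 6 (first run of 4 free units)
```

This linear scan is the main drawback of bitmaps: finding a run of a given length is slow for large memories.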
Dynamic Partition Allocation Algorithms
Basic Requirements
– Quickly locate a free partition satisfying the request
– Minimise external fragmentation
– Efficiently support merging two adjacent free partitions into a
larger partition

Dynamic Partition Allocation Algorithms
• When processes and holes are kept in a list sorted by address,
several algorithms can be used to allocate memory for the
newly created or swapped in process.
1. First fit:
a. The memory manager scans along the list until it finds a
hole that is big enough.
b. The hole is broken into two pieces: one for the process and
one for the unused memory.
c. It is fast since it searches as little as possible.
2. Next fit:
a. Same as first fit, but keeps track of where it is
whenever it finds a suitable hole.
b. Next time it is called to find a hole, it starts searching
the list from the place where it left off last time.
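First fit can be sketched as below; representing the free list as (start, length) pairs sorted by address is an assumption for illustration:

```python
def first_fit(holes, size):
    """Allocate from the first hole large enough, splitting it in place.
    holes: list of (start, length) free blocks sorted by address.
    Returns the allocated start address, or None if no hole fits."""
    for i, (start, length) in enumerate(holes):
        if length >= size:
            if length == size:
                del holes[i]                              # exact fit
            else:
                holes[i] = (start + size, length - size)  # leftover piece
            return start
    return None

holes = [(0, 5), (10, 20), (40, 8)]
print(first_fit(holes, 8))   # → 10 (the 5-unit hole is too small)
print(holes)                 # → [(0, 5), (18, 12), (40, 8)]
```

Next fit differs only in remembering the index where the previous search stopped and resuming the scan from there.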
Dynamic Partition Allocation Algorithms
3. Best fit:
a. It searches the entire list and takes the smallest hole that
is adequate, instead of breaking up a big hole that might
be needed later.
b. It is slower because it must search the entire list every
time it is called.
c. First fit and best fit tend to fill up memory with tiny,
useless holes.
4. Worst fit:
a. Tries to eliminate the problem of generating tiny holes;
it takes the largest available hole, so the leftover unused
memory will be big enough to be useful.
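Best fit and worst fit differ from first fit only in which candidate hole they select. A sketch over the same assumed (start, length) free-list representation:

```python
def _take(holes, i, size):
    """Allocate size units from holes[i], splitting or removing it."""
    start, length = holes[i]
    if length == size:
        del holes[i]                              # exact fit
    else:
        holes[i] = (start + size, length - size)  # leftover piece
    return start

def best_fit(holes, size):
    """Pick the smallest hole that is large enough."""
    fits = [i for i, (_, length) in enumerate(holes) if length >= size]
    return _take(holes, min(fits, key=lambda i: holes[i][1]), size) if fits else None

def worst_fit(holes, size):
    """Pick the largest hole, so the leftover stays usable."""
    fits = [i for i, (_, length) in enumerate(holes) if length >= size]
    return _take(holes, max(fits, key=lambda i: holes[i][1]), size) if fits else None

print(best_fit([(0, 5), (10, 20), (40, 8)], 8))   # → 40 (exact fit)
print(worst_fit([(0, 5), (10, 20), (40, 8)], 8))  # → 10 (leaves (18, 12))
```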
Dynamic Partition Allocation Algorithms
5. Quick fit:
a. Maintains separate lists for commonly requested sizes.
b. E.g. a table of n entries, in which the first entry is a
pointer to a list of 12K holes, the second entry is a pointer
to a list of 20K holes, etc.
c. With quick fit, finding a hole of the required size is
extremely fast.
d. The disadvantage is that when a process terminates or is
swapped out, finding neighbours to merge with is expensive.
e. If merging is not done, memory will fragment into a
large number of small holes into which no process will fit.
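Quick fit's per-size free lists can be sketched with a table keyed by the common sizes; the sizes and start addresses here are hypothetical:

```python
from collections import defaultdict

# Separate free lists keyed by commonly requested hole sizes (in KB).
quick_lists = defaultdict(list)
for start, size in [(0, 12), (40, 20), (100, 12), (200, 20)]:
    quick_lists[size].append(start)

def quick_alloc(size):
    """O(1) allocation when a free list for this exact size exists;
    otherwise a general allocator would be consulted (not shown)."""
    if quick_lists[size]:
        return quick_lists[size].pop()
    return None

print(quick_alloc(12))   # → 100 (last 12K hole recorded)
```

The fast path hides the cost: the size-keyed table gives no cheap way to find a freed block's address-adjacent neighbours, which is why merging is expensive.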
Allocation
A fixed partitioning scheme limits the number of active
processes and may use space inefficiently if there is a poor
match between available partition sizes and process sizes.

A dynamic partitioning scheme is more complex to maintain
and includes the overhead of compaction.

An interesting compromise is the buddy system.
Buddy System
• The entire space available is treated as a single block of size 2^U
• If a request of size s satisfies 2^(U-1) < s <= 2^U
– the entire block is allocated
• Otherwise the block is split into two equal buddies
– Splitting continues until the smallest block greater than or equal
to s is generated
Example of Buddy System
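The splitting rule can be sketched as follows: starting from the single 2^U block, keep halving while the half is still large enough (U = 10, i.e. a 1024-unit space, is an assumed example):

```python
def buddy_block_size(request, max_order):
    """Smallest power-of-two block that can satisfy the request,
    found by repeatedly splitting the 2**max_order block in half."""
    size = 2 ** max_order
    while size // 2 >= request:
        size //= 2   # split into two equal buddies; keep one half
    return size

print(buddy_block_size(100, 10))   # → 128, since 64 < 100 <= 128
```

Internal fragmentation is bounded: a request always gets less than twice what it asked for, and freed blocks merge cheaply with their buddy of equal size.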
Timesharing
• With a timesharing system there is sometimes not
enough memory to hold all the current processes.
Excess processes must be kept on disk and brought
in to run dynamically.
• There are two approaches:
1. Swapping: bringing in each process in its entirety,
running it for a while, then putting it back on the
disk.
2. Virtual Memory: allows programs to run even if
they are only partially in main memory.

Schematic View of Swapping

Swapping

Memory allocation changes as
– processes come into memory
– processes leave memory
Shaded regions are unused memory
Paging

• Partition memory into small, equal, fixed-size chunks and
divide each process into chunks of the same size
• The chunks of a process are called pages
• The chunks of memory are called frames
Paging

• The operating system maintains a page table for each process
– It contains the frame location for each page in the process
– A memory address consists of a page number and an offset
within the page
Processes and Frames

[Diagram: the pages of processes A (A.0–A.3), B (B.0–B.2), C (C.0–C.3),
and D (D.0–D.4) placed into main-memory frames; the pages of a process
need not occupy contiguous frames.]
Page Table
Replacement Policy

• When all of the frames in main memory are occupied and it is
necessary to bring in a new page, the replacement policy
determines which page currently in memory is to be replaced.
But…

• Which page is to be replaced?
• The page removed should be the page least likely to be
referenced in the near future
– How can that be determined?
Replacement Policy:
Frame Locking
One restriction on replacement policy needs to be
mentioned before looking at various algorithms:
• Some of the frames in main memory may be locked.
• When a frame is locked, the page currently stored in
that frame may not be replaced.
• Much of the kernel of the operating system is held in
locked frames, as well as key control structures.
• In addition, I/O buffers and other time-critical areas
may be locked into main memory frames.
Basic Replacement Algorithms

• There are certain basic algorithms used to select
a page to replace; they include
– Optimal
– Least recently used (LRU)
– First-in-first-out (FIFO)
– Clock
Examples

• An example of the implementation of these policies will use a
page address stream formed by executing a program:
– 2 3 2 1 5 2 4 5 3 2 5 2
• This means that the first page referenced is 2,
– the second page referenced is 3,
– and so on.
Optimal policy

• Selects for replacement the page for which the time to the
next reference is the longest

• This policy results in the fewest number of page faults.

• BUT clearly, this policy is impossible to implement,
because it would require the operating system to have
perfect knowledge of future events.
Optimal Policy
Example

• The optimal policy produces three page faults after the
frame allocation has been filled.
Least Recently Used (LRU)

• Replaces the page that has not been referenced for the
longest time
• Difficult to implement
– One approach is to tag each page with the time
of last reference.
– This requires a great deal of overhead.
LRU Example

• The LRU policy does nearly as well as the optimal policy.
– In this example, there are four page faults.
First-in, first-out (FIFO)

• Treats the page frames allocated to a process as a circular buffer
• Pages are removed in round-robin style
– Simplest replacement policy to implement
• The page that has been in memory the longest is replaced
– But this page may be needed again very soon if it hasn’t
truly fallen out of use
FIFO Example

• The FIFO policy results in six page faults.
– Note that LRU recognizes that pages 2 and 5 are referenced
more frequently than other pages, whereas FIFO does not.
Clock Policy

• Uses an additional bit called a “use bit”
• When a page is first loaded into memory or referenced,
its use bit is set to 1
• When it is time to replace a page, the OS scans the
frames, flipping all use bits that are 1 to 0
• The first frame encountered with the use bit already
set to 0 is replaced.
Clock Policy Example

• The presence of an asterisk indicates that the corresponding
use bit is equal to 1,
– and the arrow indicates the current position of the pointer.

• Note that the clock policy is adept at protecting frames 2 and 5
from replacement.
Clock Policy

Consider the replacement of a page from the buffer with
incoming page 727.
Clock Policy
Combined Examples