Memory Management

The document discusses memory management concepts including memory hierarchy, memory allocation strategies, and memory placement algorithms. It covers basic memory management concepts like memory organization, instruction execution cycle, and translation of programs to machine code. Placement algorithms like first fit, next fit, best fit, and worst fit are described.


Memory Management

Dr. M. Bilal Qureshi


Background
• Program must be brought (from disk) into
memory and placed within a process to run
• Main memory and registers are the only storage
the CPU can access directly
• Registers are accessed in one CPU clock (or
less)
• Main memory can take many cycles
• Cache sits between main memory and CPU
registers
• Memory protection is required to ensure
correct operation
Basic Concepts
• Memory consists of a large array of words or
bytes, each with its own address.
• In the instruction fetch-execute cycle, the processor
first fetches an instruction from memory, which is
then decoded and executed.
• Operands may have to be fetched from
memory. After instruction has been executed,
the results are stored back in memory.
• PC: contains the address of the next instruction to be fetched. Unless
instructed otherwise, the processor increments the PC after each
instruction fetch so that it points to the next instruction in sequence
(i.e., the instruction located at the next higher memory address).
• IR: contains the instruction most recently fetched.
Instruction Execution

• Instruction-fetch-execute cycle.
• A program to be executed by a CPU consists of a set of
instructions stored in memory.
• The processor reads (fetches) instructions from memory
one at a time and executes each instruction.
• Program execution halts only if the processor is turned off,
some sort of unrecoverable error occurs, or a program
instruction that halts the processor is encountered.
Memory Management
• Ideally programmers want memory that is
– Large
– Fast
– Non-volatile (no loss in case of power failure)
• Memory hierarchy
– Very small, extremely fast, extremely expensive, and volatile –
CPU registers
– Small, very fast, expensive, and volatile – cache
– Medium-speed, medium-priced, and volatile – main
memory
– Gigabytes of slow, cheap, and non-volatile – secondary
storage
• Memory managers handle the memory hierarchy
Basic Memory Management

Three simple ways of organizing memory with OS and one user process.
(a) Used by mainframes and minicomputers (b) Used by some handheld computers
and embedded systems (c) Used by early personal computers
Source to Execution
• A source program in a high-level or assembly
language must first be translated.
• This process generates machine-language
executable code (known as a binary image)
for the source program.
• To execute the binary code, it is loaded into
main memory and the CPU state is set
appropriately.

Multiple Programs Without Memory Abstraction
• The constant 16384 is added to every program address during the load process
• This is called static relocation

Illustration of the relocation problem. (a) A 16-KB program. (b) Another
16-KB program. (c) Two programs loaded consecutively into memory
Swapping (1)

Memory allocation changes as
– processes come into memory
– processes leave memory
Shaded regions are unused memory
Swapping (2)
• Swapping creates multiple holes in memory
• Combining them all into one big hole by moving all
the processes downward is called memory compaction.
• It is usually not done because it requires a lot of CPU
time
• How much memory should be allocated for a process
when it is created or swapped in:
– Fixed Allocation: If processes are created with a fixed size
the operating system allocates exactly what is needed
– Dynamic Allocation: memory is allocated dynamically to a
growing process. If a hole is adjacent to the process, it can
be allocated and the process allowed to grow into the hole.
Swapping (3)

(a) Allocating space for growing data segment


(b) Allocating space for two growing segments of a process
(stack & data segment)
Challenge – Memory Allocation
• Fixed partitions
• Variable/Dynamic partitions
Partitioning Strategies – Fixed
• Fixed Partitions – divide memory into equal sized
pieces (except for OS)
– Degree of multiprogramming = number of partitions
– Simple policy to implement
• All processes must fit into partition space
• Find any free partition and load the process
• Problem – what is the “right” partition size?
– Process size is limited
– Internal Fragmentation – unused memory in a
partition that is not available to other processes
Partitioning Strategies – Dynamic
• Idea: remove “wasted” memory that is not needed
in each partition
• Memory is dynamically divided into partitions
based on process needs
• Definition:
– Hole: a block of free or available memory
– Holes are scattered throughout physical memory
• New process is allocated memory from hole large
enough to fit it
Dynamic Partitions
• More complex management problem
 Must track free and used memory
 Need data structures to do tracking
 What holes are used for a process?
 External fragmentation
 Memory that is in holes too small to be usable by any process

Example (three memory snapshots): process 1 and process 3 stay resident;
process 2 terminates, leaving a hole, and process 4 then starts in part of
that hole, leaving a smaller hole behind.
Memory Allocation – Mechanism
• MM system maintains data about free and allocated
memory alternatives
– Bit maps - 1 bit per “allocation unit”
– Linked Lists - free list updated and coalesced when not
allocated to a process
• At swap-in or process create
– Find free memory that is large enough to hold the process
– Allocate part (or all) of memory to process and mark
remainder as free
• Compaction
– Moving things around so that holes can be consolidated
– Expensive in OS time
Memory Management - Maps

• Part of memory with 5 processes, 3 holes


– tick marks show allocation units
– shaded regions are free
• Corresponding bit map
• Same information as a list
Memory Management with Linked Lists
• Another way of keeping track of memory is to maintain a linked list
of allocated and free memory segments
• A segment either contains a process or is an empty
hole between two processes
• Each entry in the list specifies a hole (H) or process
(P), the address at which it starts, the length, and a
pointer to the next entry.
• The segment list is kept sorted by address
• Sorting this way has the advantage that when a
process terminates or is swapped out, updating the
list is straightforward.
Memory Management with Linked Lists

Four neighbor combinations for the terminating process, X.


Problems with Dynamic Partitions
• Fragmentation occurs due to creation and deletion of
memory space at load time

• Fragmentation is eliminated by relocation

• Relocation has to be fast to reduce overhead

• Free (unused) memory has to be managed

• Allocation of free memory is done by a placement
algorithm
Placement Algorithms (1)
• Operating system must decide which free block to
allocate to a process
• How to satisfy a request of size n from a list of
free holes
• First Fit
– Scans memory from the beginning and chooses the first
available block that is large enough
– Fastest
– Many processes may be loaded in the front end of
memory, which must be searched over when trying to
find a free block

Placement Algorithms (2)
• Next-fit
– Scans memory from the location of the last
placement
– More often allocates a block of memory at the
end of memory, where the largest block is
found
– The largest block of memory is broken up into
smaller blocks
– Compaction is required to obtain a large block
at the end of memory
Placement Algorithms (3)
• Best Fit
– Chooses the block that is closest in size to the request
– Worst performer overall
– Since the smallest suitable block is chosen, the
smallest possible fragment is left over
– Memory compaction must be done more often

• Worst Fit
– Always take the largest available hole, so that the
new hole will be big enough to be useful

Placement Algorithms (4)
• Quick Fit
– Maintains separate lists for some of the more common
sizes requested
– For example, it might have a table with n entries, in
which the first entry is a pointer to the head of a list of 4-
KB holes,
– The second entry is a pointer to a list of 8-KB holes, the
third entry a pointer to 12-KB holes, and so on.
– Holes of, say, 21 KB, could be put on either the 20-KB
list or on a special list of odd-sized holes.

Example Memory Configuration

Before and after allocation of a 16-Mbyte block


