
Chapter Three – Memory Management

Memory Management
 Understanding Memory

 What is Memory?
 Collection of data in a specific format.
 Stores instructions and processes data.
 Comprises a large array/group of words or bytes, each with its own location.
 Primary Purpose: Execute programs.
 Programs and their data must reside in main memory during execution.
 CPU fetches instructions from memory based on the Program Counter.
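To make the fetch step concrete, here is a minimal C sketch (not from the slides; the one-byte opcodes and the HALT value are assumptions for illustration) of a loop that fetches instructions from a memory array at the address held in a program counter:

  #include <stdio.h>
  #include <stdint.h>

  #define HALT 0xFFu

  int main(void) {
      uint8_t memory[] = { 0x01, 0x02, 0x03, HALT };  /* assumed tiny "program" in memory */
      size_t pc = 0;                                  /* program counter */
      for (;;) {
          uint8_t instr = memory[pc++];   /* fetch the instruction the PC points at, advance PC */
          if (instr == HALT) break;       /* decode/execute would happen here */
          printf("fetched opcode 0x%02X\n", instr);
      }
      return 0;
  }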
Memory Management

 The Need for Memory Management


 To achieve a degree of multiprogramming.
 To ensure proper utilization of memory resources.
 Many methods exist, reflecting various approaches.
 Effectiveness of algorithms depends on the situation.
Memory Management
 What is Main Memory? Main Memory (RAM)
 Central to the operation of a modern computer.
 Large array of words or bytes (hundreds of thousands to billions).
 Repository of rapidly available information.
 Shared by CPU and I/O devices.
 Where programs and information are kept for processor utilization.
 Associated with the processor for fast instruction/information movement.
 Also known as RAM (Random Access Memory).
 Volatile: Loses data upon power interruption.
Memory Management
 What is Memory Management?
 Primarily involves the management of main memory.
 In multiprogramming, the OS resides in one part, and multiple processes use the rest.
 Task: Subdividing memory among different processes.
 Manages operations between main memory and disk during process execution.
 Main Aim: Achieve efficient utilization of memory.
Memory Management
 Why is Memory Management Required?
 Allocate and de-allocate memory before and after process execution.
 Keep track of used memory space by processes.
 Minimize fragmentation issues.
 Ensure proper utilization of main memory.
 Maintain data integrity during process execution.
Logical vs. Physical Address Space
 Logical Address Space (Virtual Address):

 Address generated by the CPU.


 Represents the size of the process.
 Can change (it is relative to a base register).
 Physical Address Space (Real Address):

 Address seen by the memory unit (loaded into Memory Address Register).

 Set of all physical addresses corresponding to logical addresses.

 Computed by the Memory Management Unit (MMU).

 Always remains constant (it identifies a fixed location in RAM).

 MMU: Hardware device for run-time mapping from virtual to physical addresses.
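As a rough illustration of MMU-style run-time mapping, here is a minimal C sketch assuming a simple base/limit relocation scheme (the base of 14000 and limit of 3000 are just example values, not from the slides):

  #include <stdio.h>
  #include <stdlib.h>

  typedef struct {
      size_t base;    /* relocation register: where the process starts in RAM */
      size_t limit;   /* size of the process's logical address space */
  } mmu_t;

  /* Map a CPU-generated logical address to a physical address; an out-of-range
     address would normally trap to the OS (here we simply exit). */
  size_t translate(const mmu_t *mmu, size_t logical) {
      if (logical >= mmu->limit) {
          fprintf(stderr, "addressing error: %zu >= limit %zu\n", logical, mmu->limit);
          exit(EXIT_FAILURE);
      }
      return mmu->base + logical;
  }

  int main(void) {
      mmu_t mmu = { .base = 14000, .limit = 3000 };                    /* assumed values */
      printf("logical 346 -> physical %zu\n", translate(&mmu, 346));   /* prints 14346 */
      return 0;
  }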
Static vs. Dynamic Loading
 Program Loading Techniques:
 Loader: Program that loads a process into main memory.

 Static Loading:
 Loads the entire program into a fixed address at once.
 Requires more memory space (whole program must fit).

 Dynamic Loading:
 A routine is not loaded until it is called.
 All routines reside on disk in relocatable load format.
 Advantage: Unused routines are never loaded, efficient for large code.
 Size of process is not limited to physical memory size.
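On POSIX systems, dlopen/dlsym give a feel for loading a routine only when it is needed. A minimal sketch (the library name "libm.so.6" and the symbol "cos" are convenient assumptions; build with something like cc demo.c -ldl):

  #include <stdio.h>
  #include <dlfcn.h>

  int main(void) {
      void *handle = dlopen("libm.so.6", RTLD_LAZY);    /* load the library at run time */
      if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

      /* Look up the routine's address only now that it is needed. */
      double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
      if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

      printf("cos(0.0) = %f\n", cosine(0.0));
      dlclose(handle);                                  /* release when finished */
      return 0;
  }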
Static vs. Dynamic Loading
 Program Linking Techniques

 Linker: Combines object files (generated by compiler) into a single executable file.

 Static Linking:

 Linker combines all necessary program modules (including library routines) into one executable.
 No runtime dependency on external libraries.

 Executable file size is larger.

 Dynamic Linking:

 Similar to dynamic loading.

 "Stub" included for each library routine reference.

 Stub checks if routine is in memory; if not, loads it.

 Reduces executable size and allows library updates without recompiling.
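The stub idea can be sketched by hand in C (a minimal illustration only; real_routine stands in for a library routine, and in a real dynamic linker the resolution step would load the library and patch the call):

  #include <stdio.h>

  static double real_routine(double x) { return x * x; }   /* stands in for a library routine */
  static double (*resolved)(double) = NULL;                /* NULL until first use */

  /* The "stub": on the first call it resolves the real routine,
     afterwards it calls straight through the cached pointer. */
  static double routine_stub(double x) {
      if (resolved == NULL)
          resolved = real_routine;     /* pretend this step locates/loads the routine */
      return resolved(x);
  }

  int main(void) {
      printf("%f\n", routine_stub(2.0));   /* first call resolves the routine */
      printf("%f\n", routine_stub(3.0));   /* later calls reuse the resolved pointer */
      return 0;
  }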


Swapping
 Definition: Temporarily moving a process from main memory to secondary memory (disk) and back.

 Purpose:

 Allows more processes to run than can fit simultaneously in memory.

 Handles higher-priority processes: lower-priority processes can be "swapped out" (roll-out) to make room.
 Later, they are swapped back in (roll-in) to continue execution.

 Mechanism:

 Process moves to swap space (secondary memory).

 The major part of swap time is transfer time.

 Total time is directly proportional to the amount of memory swapped.

 Drawback: Involves slow disk I/O, impacting performance if excessive.
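As a rough, illustrative calculation (the figures are assumed, not from the slides): swapping out a 100 MB process to a disk with a 50 MB/s transfer rate takes 100 / 50 = 2 seconds, and swapping it back in takes another 2 seconds, so one roll-out/roll-in cycle costs about 4 seconds of pure transfer time, which is why excessive swapping hurts performance.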


Memory Management Techniques

 Overview of Techniques
 Methods used by an operating system to efficiently allocate, utilize, and manage memory resources.
 Ensure smooth program execution and optimal use of system memory.
 Different approaches based on how memory is divided and allocated.
Memory Management Techniques
 Monoprogramming (Without Swapping)
 Simplest approach.
 Memory divided into two sections:
 Operating System part
 User Program part
 OS keeps track of the first and last location for user programs.
 OS loaded at the bottom or top (often low memory, for interrupt vectors).
 Data/code sharing is not a concern in a single-process environment.
Memory Management Techniques
 Multiprogramming with Fixed Partitions
 Fixed Partitioning (Without Swapping)
 Introduced to support multiprogramming.

 Based on contiguous allocation.

 Characteristics:
 Memory partitioned into a fixed number of regions.

 Each partition is a fixed size block of contiguous memory.

 Each partition can hold exactly one process.

 Example: 5 regions (1 for OS, 4 for user programs).

 Management: OS uses a Partition Table to track status.


Contiguous Memory Allocation
 Each process is given a single, continuous block of memory.

 All data for a process is stored in adjacent memory locations.

 Simpler to manage initially.

 Methods:
 Fixed Partition Allocation: Memory divided into fixed-sized partitions at system initialization.
 Dynamic Partition Allocation: Memory divided into variable-sized partitions based on process size.
 When loading a process, OS decides which free block to allocate if multiple exist.
Contiguous Memory Allocation
 Partition Allocation Methods (Placement Algorithms)
 Strategies for allocating free blocks in dynamic partitioning:
 First Fit: Allocates the first hole (free block) in memory that is large enough to fit the process.
 Best Fit: Allocates the smallest hole that is large enough to fit the process.
 Worst Fit: Allocates the largest hole available, leaving a large remaining hole.
 Next Fit: Similar to First Fit, but starts searching from where the previous search left off.
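The first three strategies above can be sketched in a few lines of C (a minimal illustration; the hole sizes and the 212-unit request are assumed, and Next Fit would only add a remembered starting index to First Fit):

  #include <stdio.h>

  int first_fit(const int holes[], int n, int request) {
      for (int i = 0; i < n; i++)
          if (holes[i] >= request) return i;          /* first hole big enough */
      return -1;                                      /* -1: no hole fits */
  }

  int best_fit(const int holes[], int n, int request) {
      int best = -1;
      for (int i = 0; i < n; i++)
          if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
              best = i;                               /* smallest hole that fits */
      return best;
  }

  int worst_fit(const int holes[], int n, int request) {
      int worst = -1;
      for (int i = 0; i < n; i++)
          if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
              worst = i;                              /* largest hole available */
      return worst;
  }

  int main(void) {
      int holes[] = { 100, 500, 200, 300, 600 };      /* assumed free-hole sizes */
      int n = 5, request = 212;
      printf("first fit -> hole %d\n", first_fit(holes, n, request));  /* 1 (500) */
      printf("best  fit -> hole %d\n", best_fit(holes, n, request));   /* 3 (300) */
      printf("worst fit -> hole %d\n", worst_fit(holes, n, request));  /* 4 (600) */
      return 0;
  }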
Non-Contiguous Memory Allocation
 Process is divided into smaller parts.

 These parts are stored in different, non-adjacent memory locations.

 The entire process does not need to be stored in one continuous block.

 Advantages: Eliminates external fragmentation; improves memory utilization.
 Key Techniques:
 Paging
 Segmentation
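As a minimal paging sketch (the 4 KB page size, the tiny page table, and the example address are all assumptions for illustration): a logical address is split into a page number and an offset, and the page table maps the page to a frame:

  #include <stdio.h>
  #include <stdint.h>

  #define PAGE_SIZE 4096u                    /* assumed page size: 4 KB */

  int main(void) {
      uint32_t page_table[4] = { 5, 9, 2, 7 };   /* page -> frame mapping (assumed) */
      uint32_t logical = 6000;                   /* example logical address */

      uint32_t page   = logical / PAGE_SIZE;     /* 6000 / 4096 = page 1 */
      uint32_t offset = logical % PAGE_SIZE;     /* 6000 % 4096 = 1904 */
      uint32_t physical = page_table[page] * PAGE_SIZE + offset;

      printf("logical %u -> page %u, offset %u -> physical %u\n",
             logical, page, offset, physical);   /* 9*4096 + 1904 = 38768 */
      return 0;
  }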
Memory Fragmentation

 Occurs when processes are loaded and removed, creating small, unusable "holes" in memory.
 These holes may be too small or non-contiguous to be allocated to new processes.
 Two main types:
 Internal Fragmentation
 External Fragmentation
Memory Fragmentation
 Internal Fragmentation
 Definition:

 Occurs when the memory block allocated to a process is larger than the size the process requested.

 Result:

 Unused space is left over within the allocated block.

 Example (Fixed Partitioning):

 Process P4 (2MB) requests memory.

 Allocated a 3MB block.

 The leftover 1MB is wasted internally; it cannot be allocated to other processes.
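In general, internal fragmentation = allocated block size - requested size; here 3MB - 2MB = 1MB.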


Memory Fragmentation
 External Fragmentation
 Definition: Occurs when there is enough total free memory, but it is broken up into many small, non-contiguous blocks.
 Result:
 A new process cannot be loaded even if total free space is sufficient, because no single continuous block is large enough.

 Example:

 Processes P1, P2, P3 allocated (e.g., 2MB in a 3MB block, 4MB in a 6MB block, 7MB in a 7MB block).
 Leaves 1MB and 2MB free (non-contiguous).
 New Process P4 (3MB) cannot be assigned: the total free space is 3MB (1MB + 2MB), but no single contiguous hole is large enough.
Thank You !!
