Lecture8_OPSYS

The document discusses memory management in computing, highlighting the need for efficient memory allocation due to increasing application demands. It outlines the memory hierarchy, management requirements such as relocation, protection, and sharing, and various partitioning methods including fixed and dynamic partitioning. The document emphasizes the importance of optimizing memory usage to enhance CPU efficiency and reduce fragmentation.


• Ideally programmers want memory that is

– large
– fast
– non-volatile

• Memory hierarchy
– small amount of fast, expensive memory – cache
– some medium-speed, medium-priced main memory
– gigabytes of slow, cheap disk storage

• Memory manager handles the memory hierarchy


The need for memory management
• Memory is cheap today, and getting cheaper
– But applications are demanding more and more
memory, there is never enough!
• Memory management involves swapping
blocks of data to and from secondary storage
• Memory I/O is slow compared to the CPU
– The OS must cleverly time the swapping to
maximise the CPU’s efficiency
Memory Management Requirements
• Relocation
• Protection
• Sharing
• Logical organisation
• Physical organisation
Requirements: Relocation
• The programmer does not know where the
program will be placed in memory when it is
executed,
– it may be swapped to disk and return to main
memory at a different location (relocated)
• Memory references must be translated to the
actual physical memory address
Relocation and Protection
• Cannot be sure where program will be loaded in memory
– address locations of variables, code routines cannot be
absolute
– must keep a program out of other processes’ partitions

• Use base and limit values


– address locations are added to the base value to map to the
physical address
– address locations larger than the limit value are an error

Addressing
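
A minimal sketch (not taken from the slides) of how base and limit
registers implement relocation and protection; the struct and function
names here are made up for illustration:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-process relocation registers, loaded by the OS on dispatch. */
typedef struct {
    uint32_t base;   /* physical address where the partition starts */
    uint32_t limit;  /* size of the partition in bytes */
} relocation_regs;

/* Translate a logical (program-relative) address to a physical address.
 * Returns false if the access would fall outside the process's partition. */
static bool translate(const relocation_regs *r, uint32_t logical, uint32_t *physical)
{
    if (logical >= r->limit)        /* protection: larger than the limit value? */
        return false;               /* addressing error, trap to the OS */
    *physical = r->base + logical;  /* relocation: add the base value */
    return true;
}

int main(void)
{
    relocation_regs r = { .base = 0x40000, .limit = 0x10000 };  /* 64 KiB partition */
    uint32_t phys;

    if (translate(&r, 0x1234, &phys))
        printf("logical 0x1234 -> physical 0x%x\n", (unsigned)phys);
    if (!translate(&r, 0x20000, &phys))
        printf("logical 0x20000 rejected: outside the partition\n");
    return 0;
}

In real hardware this check and addition happen on every memory
reference; the OS only reloads the two registers on a context switch or
after relocating the process.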
Requirements: Protection
• Processes should not be able to reference
memory locations in another process without
permission
• Impossible to check absolute addresses at
compile time
• Must be checked at run time
Requirements: Sharing
• Allow several processes to access the same
portion of memory
• Better to allow each process access to the
same copy of the program rather than having
its own separate copy
Requirements: Logical Organization
• Memory is organized linearly (usually)
• Programs are written in modules
– Modules can be written and compiled
independently
• Different degrees of protection given to
modules (read-only, execute-only)
• Share modules among processes
• Segmentation helps here
Requirements: Physical Organization
• Cannot leave the programmer with the
responsibility to manage memory
• Memory available for a program plus its data
may be insufficient
– Overlaying allows various modules to be assigned
the same region of memory, but is time-consuming
to program
• Programmer does not know how much space
will be available
Basic Memory Management
Monoprogramming without Swapping or Paging

• Three simple ways of organizing memory with
an operating system and one user process
– used in MS-DOS, palmtops and embedded systems
Swapping (1)

• Memory allocation changes as
– processes come into memory
– processes leave memory
• Shaded regions are unused memory
Swapping (2)

• Allocating space for a growing data segment
• Allocating space for a growing stack & data segment
Partitioning
• An early method of managing memory
– Pre-virtual memory
– Not used much now
• But, it will clarify the later discussion of virtual
memory if we look first at partitioning
– Virtual Memory has evolved from the partitioning
methods
Types of Partitioning
• Fixed Partitioning
• Dynamic Partitioning
• Simple Paging
• Simple Segmentation
• Virtual Memory Paging
• Virtual Memory Segmentation
Fixed Partitioning
• Equal-size partitions
– Any process whose size is less than or
equal to the partition size can be
loaded into an available partition
• The operating system can swap a
process out of a partition
– If none are in a ready or running state
Fixed Partitioning Problems
• A program may not fit in a partition.
– The programmer must design the program with
overlays
• Main memory use is inefficient.
– Any program, no matter how small, occupies an
entire partition.
– This results in internal fragmentation.
Solution – Unequal Size Partitions
• Lessens both problems
– but doesn’t solve completely
• In the figure,
– Programs up to 16M can be accommodated
without overlay
– Smaller programs can be placed in smaller
partitions, reducing internal fragmentation
Placement Algorithm
• Equal-size
– Placement is trivial (no options)
• Unequal-size
– Can assign each process to the smallest partition
within which it will fit
– Queue for each partition
– Processes are assigned in such a way as to
minimize wasted memory within a partition
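
A minimal sketch (with made-up partition sizes) of this placement rule
for unequal-size fixed partitions, where each partition has its own
queue; choose_partition picks the queue a new process should join:

#include <stdio.h>

/* Hypothetical unequal-size fixed partitions (in MB), sorted smallest to largest. */
static const int partition_mb[] = { 2, 4, 6, 8, 12, 16 };
static const int npart = sizeof partition_mb / sizeof partition_mb[0];

/* Smallest partition that can hold the process, i.e. the queue it should
 * wait in; returns -1 if the process is larger than the largest partition. */
static int choose_partition(int process_mb)
{
    for (int i = 0; i < npart; i++)
        if (process_mb <= partition_mb[i])
            return i;
    return -1;   /* too big: would need overlays */
}

int main(void)
{
    const int jobs[] = { 3, 7, 16, 20 };
    for (int i = 0; i < 4; i++) {
        int q = choose_partition(jobs[i]);
        if (q < 0)
            printf("%2dM process fits in no partition\n", jobs[i]);
        else
            printf("%2dM process -> queue for the %2dM partition "
                   "(%dM internal fragmentation)\n",
                   jobs[i], partition_mb[q], partition_mb[q] - jobs[i]);
    }
    return 0;
}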
Remaining Problems with Fixed
Partitions
• The number of active processes is limited by
the system
– i.e. limited by the pre-determined number of
partitions
• A large number of very small processes will not
use the space efficiently
– In either fixed or variable length partition methods
Dynamic Partitioning
• Partitions are of variable length and number
• Process is allocated exactly as much memory
as required
Dynamic Partitioning Example

• 64M of memory: the OS occupies 8M, leaving 56M
for user processes
• P1 (20M), P2 (14M) and P3 (18M) are loaded in turn,
leaving a 4M hole at the end of memory
• P2 is swapped out; P4 (8M) is loaded into the 14M hole
it left, leaving a 6M hole
• P1 is swapped out; P2 (14M) is brought back into the 20M
hole left by P1, leaving another 6M hole

External Fragmentation

• Memory external to all processes becomes fragmented
(here three unusable holes of 6M, 6M and 4M)
• Can be resolved using compaction
– OS moves processes so that they are contiguous
– Time-consuming and wastes CPU time
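
A minimal sketch of what compaction does, using the assumed final layout
of the example above (P2, P4 and P3 separated by holes); a real OS would
also have to copy each process's memory and update its base register:

#include <stdio.h>

/* Snapshot of the example: process name, start address and size in MB,
 * in memory order; the OS occupies 0M-8M and the gaps are the holes. */
typedef struct { const char *name; int start_mb; int size_mb; } partition;

int main(void)
{
    partition p[] = { {"P2", 8, 14}, {"P4", 28, 8}, {"P3", 42, 18} };
    const int nproc = sizeof p / sizeof p[0];

    /* Compaction: slide every process down so the allocated partitions
     * become contiguous and the free space coalesces into one block. */
    int next_free = 8;                 /* first address after the 8M OS */
    for (int i = 0; i < nproc; i++) {
        p[i].start_mb = next_free;
        next_free += p[i].size_mb;
    }

    for (int i = 0; i < nproc; i++)
        printf("%s now occupies %2dM-%2dM\n",
               p[i].name, p[i].start_mb, p[i].start_mb + p[i].size_mb);
    printf("single free block: %dM-64M\n", next_free);
    return 0;
}

After compaction the three holes of 6M, 6M and 4M become one 16M block
at the top of memory, large enough to hold another medium-sized process.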
Dynamic Partitioning
• Operating system must decide which free block to
allocate to a process
• Best-fit algorithm
– Chooses the block that is closest in size to the request
– Worst performer overall
– Since the smallest block that fits is chosen, the
fragment left behind is as small as possible,
usually too small to be useful
– Memory compaction must be done more often
Dynamic Partitioning
• First-fit algorithm
– Scans memory from the beginning and chooses
the first available block that is large enough
– Fastest
– May have many processes loaded in the front end
of memory that must be searched over when trying
to find a free block
Dynamic Partitioning
• Next-fit
– Scans memory from the location of the last placement
– More often allocates a block of memory at the end
of memory, where the largest block is found
– The largest block of memory is broken up into smaller
blocks
– Compaction is required to obtain a large block at the
end of memory
Allocation

Figure: example memory configuration before and after allocation
of a 16-Mbyte block
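
A minimal sketch (with a made-up free list) of the three placement
strategies deciding where a 16-Mbyte request like the one in the figure
should go:

#include <stdio.h>

/* Hypothetical free list: sizes in MB of the free blocks, in memory order. */
static int free_mb[] = { 6, 22, 12, 18 };
static const int nblocks = sizeof free_mb / sizeof free_mb[0];
static int last_placement = 0;     /* roving pointer used by next-fit */

/* First-fit: first block large enough, scanning from the start of memory. */
static int first_fit(int request)
{
    for (int i = 0; i < nblocks; i++)
        if (free_mb[i] >= request)
            return i;
    return -1;
}

/* Best-fit: the block whose size is closest to (but not below) the request. */
static int best_fit(int request)
{
    int best = -1;
    for (int i = 0; i < nblocks; i++)
        if (free_mb[i] >= request && (best < 0 || free_mb[i] < free_mb[best]))
            best = i;
    return best;
}

/* Next-fit: like first-fit, but resume scanning where the last search ended. */
static int next_fit(int request)
{
    for (int n = 0; n < nblocks; n++) {
        int i = (last_placement + n) % nblocks;
        if (free_mb[i] >= request)
            return last_placement = i;
    }
    return -1;
}

int main(void)
{
    int request = 16;   /* a 16-Mbyte allocation request */
    printf("first-fit -> %dM block\n", free_mb[first_fit(request)]);  /* 22M block */
    printf("best-fit  -> %dM block\n", free_mb[best_fit(request)]);   /* 18M block */
    printf("next-fit  -> %dM block\n", free_mb[next_fit(request)]);   /* 22M block */
    return 0;
}

Best-fit leaves only a 2M fragment behind, which illustrates the point
above: the leftover pieces it creates are usually too small to satisfy
later requests.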
