
1. Given memory partitions of 100k, 500k, 200k, 300k and 600k (in order), how would each of the first-fit, best-fit and worst-fit algorithms place processes of 212k, 417k, 112k and 426k (in order)? Which algorithm makes the most efficient use of memory?

First-Fit Algorithm:

Process 212k: Placed in the 500k partition (remaining: 500k - 212k = 288k)
Process 417k: Placed in the 600k partition (remaining: 600k - 417k = 183k)
Process 112k: Placed in the 288k hole left in the 500k partition (remaining: 288k - 112k = 176k)
Process 426k: Cannot be placed in any available hole; it must wait

Best-Fit Algorithm:

Process 212k: Placed in the 300k partition (remaining: 300k - 212k = 88k)
Process 417k: Placed in the 500k partition (remaining: 500k - 417k = 83k)
Process 112k: Placed in the 200k partition (remaining: 200k - 112k = 88k)
Process 426k: Placed in the 600k partition (remaining: 600k - 426k = 174k)

Worst-Fit Algorithm:

Process 212k: Placed in the 600k partition (remaining: 600k - 212k = 388k)
Process 417k: Placed in the 500k partition (remaining: 500k - 417k = 83k)
Process 112k: Placed in the 388k hole left in the 600k partition (remaining: 388k - 112k = 276k)
Process 426k: Cannot be placed in any available hole; it must wait

Among the three algorithms, the Best-Fit algorithm makes the most efficient use of memory in this example: it is the only one that places all four processes. First-Fit and Worst-Fit both leave the 426k process waiting because their remaining free blocks are too small to hold it.
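The placements above can be checked with a short simulation. The following is a minimal sketch in Python (the place() helper and the hole/process lists are illustrative names, using the sizes from the question); it tries each strategy against a copy of the hole list:

# Minimal sketch: simulate first-fit, best-fit and worst-fit placement.
# Hole and process sizes are taken from the question; the helper name
# "place" is illustrative, not from any library.
def place(holes, processes, strategy):
    holes = holes[:]                      # work on a copy of the free list
    placements = []
    for p in processes:
        candidates = [i for i, h in enumerate(holes) if h >= p]
        if not candidates:
            placements.append((p, None))  # process must wait
            continue
        if strategy == "first":
            i = candidates[0]                            # first hole that fits
        elif strategy == "best":
            i = min(candidates, key=lambda j: holes[j])  # tightest fit
        else:
            i = max(candidates, key=lambda j: holes[j])  # largest hole
        placements.append((p, i))
        holes[i] -= p                     # shrink the chosen hole
    return placements

holes = [100, 500, 200, 300, 600]         # in KB, in address order
procs = [212, 417, 112, 426]
for s in ("first", "best", "worst"):
    print(s, place(holes, procs, s))

Running the sketch reproduces the placements listed above: only best-fit places all four processes, while first-fit and worst-fit leave the 426k process waiting.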

2. Discuss in detail:

(i) Paging hardware - logical address to physical address mapping (with diagram)
(ii) TLB
(iii) External fragmentation and its solution

(i) Paging Hardware - Logical Address to Physical Address Mapping:

Paging is a memory management scheme used by operating systems to handle virtual memory. It divides the logical address space of a process into fixed-size units called pages and physical memory into fixed-size blocks called frames. The mapping between logical addresses and physical addresses is done using paging hardware.

The logical address consists of two components: the page number and the offset
within the page. The page number is used as an index into the page table, which
is a data structure maintained by the operating system. The page table contains
the base address of each page in physical memory.

Here is a simplified diagram illustrating the logical address to physical address mapping using paging hardware:

              Page Table
  +-------------+--------------+
  | Page Number | Frame Number |
  +-------------+--------------+
  |     ...     |      ...     |
  +-------------+--------------+
  | Page Number | Frame Number |
  +-------------+--------------+

  Logical Address                    Physical Address
  +--------------+                  +----------------+
  | Page Number  | --(page table)-> |  Frame Number  |
  +--------------+                  +----------------+
  |    Offset    | ---------------> |     Offset     |
  +--------------+                  +----------------+
When a process generates a logical address, the page number is used as an index
into the page table to retrieve the corresponding frame number. The offset
remains unchanged since it represents the location within the page.

The frame number obtained from the page table is combined with the offset to
form the physical address, which represents the actual location in physical
memory where the data resides.
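As a concrete illustration (not part of the original answer), the short Python sketch below performs this split-and-recombine step; PAGE_SIZE and the page_table contents are assumed example values:

# Minimal sketch of logical-to-physical translation with paging.
# PAGE_SIZE and page_table are assumed example values.
PAGE_SIZE = 4096                     # 4 KB pages -> 12-bit offset
page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]      # page-table lookup
    return frame_number * PAGE_SIZE + offset    # frame base + unchanged offset

# Logical address 0x1ABC lies in page 1, which maps to frame 2,
# so the physical address is 2 * 4096 + 0xABC = 0x2ABC.
print(hex(translate(0x1ABC)))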

(ii) TLB (Translation Lookaside Buffer):

The Translation Lookaside Buffer (TLB) is a hardware cache that stores recently
used page table entries. Its purpose is to improve the efficiency of the address
translation process by reducing the number of memory accesses required.

The TLB is a small, high-speed memory located between the CPU and the page
table. It holds a subset of the page table entries that have been recently
accessed. Each entry in the TLB contains a page number, frame number, and
additional control bits.

When a process generates a logical address, the TLB is first checked. If the
page number is found in the TLB, it is referred to as a TLB hit, and the
corresponding frame number is retrieved. This avoids the need to access the page
table, saving time and improving performance.

If the page number is not found in the TLB, it is referred to as a TLB miss. In
this case, the page table must be accessed to retrieve the frame number, and the
TLB is updated with the new entry for future reference.

The TLB uses a cache replacement algorithm (such as Least Recently Used) to
manage its limited size. When the TLB becomes full, the least recently used
entry is evicted to make room for a new entry.
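The hit/miss-and-refill behaviour described above can be sketched in a few lines. The TLB class below is an assumed, simplified software model (a real TLB is associative hardware); it uses an OrderedDict to approximate LRU replacement:

# Minimal sketch of a TLB with LRU replacement in front of a page table.
# The class, capacity and page_table values are illustrative assumptions.
from collections import OrderedDict

class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()              # page number -> frame number

    def lookup(self, page_number, page_table):
        if page_number in self.entries:           # TLB hit
            self.entries.move_to_end(page_number) # mark as most recently used
            return self.entries[page_number], True
        frame = page_table[page_number]           # TLB miss: consult page table
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)      # evict least recently used
        self.entries[page_number] = frame         # cache for future references
        return frame, False

page_table = {p: p + 100 for p in range(16)}      # dummy page table
tlb = TLB(capacity=4)
for p in [0, 1, 0, 2, 3, 4, 0]:
    frame, hit = tlb.lookup(p, page_table)
    print(p, frame, "hit" if hit else "miss")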

(iii) External Fragmentation with Solution:

External fragmentation is a memory management issue that occurs when free memory is scattered throughout the system in many small, non-contiguous blocks. Enough total free memory may exist to satisfy a request, yet no single contiguous block is large enough, so the allocation fails. This leads to inefficient memory utilization and can prevent larger processes from being loaded.

One solution to external fragmentation is memory compaction, which rearranges the allocated and free memory blocks to eliminate or reduce fragmentation. Compaction moves processes in memory so that the free blocks are consolidated into larger contiguous blocks.

Here is a step-by-step explanation of the memory compaction process (a small code sketch follows the steps):

1. Identify the locations of allocated and free memory blocks in the system.
2. Determine the total amount of free memory available.
3. Move the allocated processes as close together as possible, eliminating any gaps between them.
4. Slide the free memory blocks towards one end of memory, consolidating them into a single large contiguous block that can satisfy future allocation requests.
5. Update the relocation (base) register or page-table entries of each moved process so that its addresses map to the new physical location.
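The following is a minimal Python sketch of the idea; the compact() function, block names and sizes are hypothetical, and it simply slides every allocated block toward address 0 so that one large free hole remains at the top of memory:

# Minimal sketch of memory compaction over hypothetical blocks.
# blocks: list of (name, base, size) for allocated regions, sizes in KB.
def compact(blocks, memory_size):
    next_base = 0
    relocated = []
    for name, _old_base, size in sorted(blocks, key=lambda b: b[1]):
        relocated.append((name, next_base, size))    # move block down
        next_base += size                            # leave no gap behind it
    free_hole = (next_base, memory_size - next_base) # one contiguous hole
    return relocated, free_hole

blocks = [("P1", 0, 100), ("P2", 300, 200), ("P3", 700, 150)]
print(compact(blocks, memory_size=1000))
# -> ([('P1', 0, 100), ('P2', 100, 200), ('P3', 300, 150)], (450, 550))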
