OS
The operating system manages a computer system's resources and provides the
means for their proper use during operation.
OS goals –
Maximum CPU utilization
Minimal process starvation
Efficient resource management
Functions of OS:
1) Process Management: Manages processes, including creation, scheduling,
and termination. It ensures efficient CPU usage and handles multitasking by
allowing multiple processes to run concurrently.
2) User Interface (UI): Provides an interface for user interaction with the system,
such as command-line interfaces (CLI) or graphical user interfaces (GUI).
Types of OS:
Single-process OS: only one process executes at a time from the ready queue.
[Oldest]
Batch-processing OS:
Multiprogramming OS:
Multitasking OS:
Multithreading vs. multiprocessing (multitasking)
System Calls:
Kernel mode -> mode bit 0
User mode -> mode bit 1
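As a minimal illustration (assuming a Linux/Unix system and C), a library call
such as write() traps from user mode into kernel mode to do the work and then
returns to user mode:

#include <unistd.h>     /* write(): wrapper around the write system call */
#include <string.h>

int main(void) {
    const char *msg = "hello from user mode\n";
    /* The call traps into the kernel (mode bit 0); the kernel performs the
       I/O on our behalf, then control returns to user mode (mode bit 1). */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}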
Power reaches the components and the CPU turns on -> the BIOS (Basic Input
Output System), a small program stored on the motherboard, checks all the
input/output devices -> the bootloader program on disk is called -> the kernel
and the rest of the OS are loaded.
A 32-bit OS has 32-bit registers and can access 2^32 unique memory
addresses, i.e., 4 GB of physical memory.
A 64-bit OS has 64-bit registers and can access 2^64 unique memory
addresses, i.e., 17,179,869,184 GB of physical memory.
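Worked out: 2^32 bytes = 4,294,967,296 bytes = 4 GB, and 2^64 bytes = 2^34 GB
= 17,179,869,184 GB.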
Process - PCB, state, queues:
Scheduling terms:
Priority Scheduling
RR (Round Robin)
1. Mutual Exclusion: If one process is in its critical section, no other process
can enter its critical section simultaneously.
2. Progress: If no process is executing in its critical section and some
processes wish to enter theirs, then only those processes that are not in
their remainder sections can take part in deciding which process enters
next, and this decision cannot be postponed indefinitely.
3. Bounded Waiting: There must be a limit on the number of times other
processes can enter their critical sections before a process's request to
enter is granted.
Solutions:
What is semaphore:
Disadvantages: busy wait, deadlocks and starvation
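A minimal sketch of semaphore use with POSIX semaphores in C (assumes pthreads
on Linux, compiled with -pthread; sem_wait is the P/wait operation, sem_post is
the V/signal operation):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t sem;                      /* semaphore guarding the shared counter    */
int shared_counter = 0;

void *worker(void *arg) {
    sem_wait(&sem);             /* P(): decrement, block if value is 0      */
    shared_counter++;           /* critical section                         */
    sem_post(&sem);             /* V(): increment, wake a waiting thread    */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&sem, 0, 1);       /* initial value 1 -> behaves like a mutex  */
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("%d\n", shared_counter);
    sem_destroy(&sem);
    return 0;
}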
Bounded buffer (producer-consumer):
● Semaphores:
○ mutex: Ensures mutual exclusion for accessing the buffer.
○ empty: Tracks the number of empty slots in the buffer.
○ full: Tracks the number of filled slots.
● How it works:
○ Producer: It waits if the buffer is full, locks the buffer, adds an item,
and signals that a slot is now full.
○ Consumer: It waits if the buffer is empty, locks the buffer, removes
an item, and signals that a slot is now empty.
This ensures that the buffer is not overfilled or underutilized and that only one
process accesses the buffer at a time.
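A minimal sketch of this scheme in C (assuming POSIX semaphores; the buffer
size and integer items are illustrative):

#include <semaphore.h>

#define N 8                       /* buffer capacity (illustrative)        */
int buffer[N];
int in = 0, out = 0;

sem_t mutex;                      /* mutual exclusion on the buffer        */
sem_t empty;                      /* counts empty slots, starts at N       */
sem_t full;                       /* counts filled slots, starts at 0      */

void producer_put(int item) {
    sem_wait(&empty);             /* wait if no empty slot                 */
    sem_wait(&mutex);             /* lock the buffer                       */
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);             /* unlock the buffer                     */
    sem_post(&full);              /* one more slot is filled               */
}

int consumer_get(void) {
    sem_wait(&full);              /* wait if nothing to consume            */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);             /* one more slot is empty                */
    return item;
}

/* Initialization (once, before starting threads):
   sem_init(&mutex, 0, 1); sem_init(&empty, 0, N); sem_init(&full, 0, 0); */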
Reader writer:
This problem involves multiple readers and writers accessing shared data.
Readers can read simultaneously, but writers need exclusive access.
● Semaphores:
○ mutex: Ensures mutual exclusion while updating the reader count.
○ wrt: Ensures exclusive access for writers.
● How it works:
○ Readers: Multiple readers can read at the same time, but the first
reader locks the writer, and the last reader unlocks it.
○ Writers: A writer has exclusive access, meaning no other readers or
writers can access the shared resource while it’s writing.
This ensures that writers have exclusive access to modify data, and multiple
readers can access it concurrently when no writer is active.
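A minimal sketch of the readers-preference variant in C (assuming POSIX
semaphores; read_count tracks the number of active readers):

#include <semaphore.h>

sem_t mutex;          /* protects read_count; initialize to 1              */
sem_t wrt;            /* writer exclusion; initialize to 1                 */
int read_count = 0;   /* number of readers currently reading               */

void reader(void) {
    sem_wait(&mutex);
    read_count++;
    if (read_count == 1)          /* first reader locks out writers        */
        sem_wait(&wrt);
    sem_post(&mutex);

    /* ... read the shared data ... */

    sem_wait(&mutex);
    read_count--;
    if (read_count == 0)          /* last reader lets writers in again     */
        sem_post(&wrt);
    sem_post(&mutex);
}

void writer(void) {
    sem_wait(&wrt);               /* exclusive access                      */
    /* ... write the shared data ... */
    sem_post(&wrt);
}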
DINING PHILOSOPHERS
Semaphores:
How it Works:
1. Philosopher:
○ When a philosopher wants to eat, they will try to pick up the left and
right forks (semaphores).
○ They wait (decrement) the semaphore for the left fork and then the
right fork. If either fork is unavailable (i.e., the semaphore is 0), they
will not proceed.
○ Once both forks are acquired, the philosopher eats.
○ After eating, the philosopher releases (increments) both forks’
semaphores, making them available for others.
Key Points:
● This approach ensures that no two philosophers can hold the same fork at
the same time, but by itself it can still deadlock if every philosopher
picks up their left fork simultaneously.
● Deadlock is avoided by breaking the circular wait condition, e.g., by having
one philosopher (or every alternate philosopher) pick up the right fork
first, or by allowing at most N-1 philosophers to compete for forks at once
(see the sketch below).
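A minimal sketch of the asymmetric-ordering fix in C (assuming POSIX semaphores
and pthreads; N and the eating body are illustrative):

#include <semaphore.h>
#include <pthread.h>

#define N 5
sem_t fork_sem[N];                 /* one binary semaphore per fork        */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;

    /* Break circular wait: the last philosopher picks up the right fork first */
    int first  = (i == N - 1) ? right : left;
    int second = (i == N - 1) ? left  : right;

    sem_wait(&fork_sem[first]);
    sem_wait(&fork_sem[second]);
    /* ... eat ... */
    sem_post(&fork_sem[second]);
    sem_post(&fork_sem[first]);
    return NULL;
}

/* Initialization: for (int i = 0; i < N; i++) sem_init(&fork_sem[i], 0, 1); */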
DEADLOCK:
What is deadlock:
Avoidance:
The idea is that the kernel is given, in advance, information about which
resources each process will use during its lifetime.
With this information, the system can decide for each request whether the
process should wait.
To decide whether the current request can be satisfied or delayed, the
system must consider the resources currently available, resources currently
allocated to each process in the system and the future requests and
releases of each process. The Banker's algorithm is a well-known method
where processes declare their maximum resource needs in advance, and
the system checks whether granting a resource keeps the system in a safe
state.
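A minimal sketch of the safety check at the heart of the Banker's algorithm
(assuming P processes and R resource types; the Need, Allocation, and Available
structures are illustrative):

#include <stdbool.h>
#include <string.h>

#define P 5                         /* number of processes (illustrative)  */
#define R 3                         /* number of resource types            */

/* Returns true if the system is in a safe state. */
bool is_safe(int available[R], int allocation[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = { false };
    memcpy(work, available, sizeof(work));

    for (int count = 0; count < P; ) {
        bool found = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            /* Can process i finish with the resources currently free?     */
            bool can_finish = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_finish = false; break; }
            if (can_finish) {
                /* Assume i runs to completion and releases its allocation */
                for (int j = 0; j < R; j++) work[j] += allocation[i][j];
                finished[i] = true;
                found = true;
                count++;
            }
        }
        if (!found) return false;   /* no process can proceed -> unsafe    */
    }
    return true;                    /* a safe sequence exists              */
}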
Defragmentation/Compaction:
● First Fit: Allocates the first available block/hole that fits the request. Simple
and fast.
● Next Fit: Similar to First Fit but starts searching from the last allocated
block.
● Best Fit: Allocates the smallest hole that is large enough, leaving the
smallest leftover hole. Wastes little space per allocation but tends to
create many tiny, unusable holes (external fragmentation).
● Worst Fit: Allocates the largest hole available, so the leftover piece is
more likely to remain usable, but it quickly breaks up the large holes
needed for big requests.
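A minimal sketch of the First Fit strategy over an array of free holes (the
hole array and the -1 failure value are illustrative):

/* Returns the index of the first hole large enough for `request`,
 * or -1 if no hole fits (First Fit). */
int first_fit(int hole_size[], int holes, int request) {
    for (int i = 0; i < holes; i++)
        if (hole_size[i] >= request) {
            hole_size[i] -= request;   /* shrink the hole by the allocation */
            return i;
        }
    return -1;                         /* allocation fails                  */
}

Best Fit would instead scan all holes and pick the smallest one that still
satisfies the request; Next Fit would resume the scan from where the previous
search stopped.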
Non-contiguous allocation - paging:
Paging?
Paging is a non-contiguous memory management technique that controls how data
is stored in and retrieved from RAM. It divides a program's logical memory
into fixed-size blocks called pages and the physical memory into frames of the
same size. When a program is executed, its pages are loaded into any available
frames in memory, and the mapping is done through a page table.
Paging:
● Divides memory into equal-size frames and logical memory into pages.
● Page Table maps pages to frames, containing the base address of each
page.
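A minimal sketch of logical-to-physical address translation with a single-level
page table (4 KB pages assumed for illustration):

#include <stdint.h>

#define PAGE_SIZE 4096u                      /* 4 KB pages (illustrative)   */

/* page_table[p] holds the frame number for page p */
uint32_t translate(uint32_t logical, const uint32_t page_table[]) {
    uint32_t page   = logical / PAGE_SIZE;   /* page number                 */
    uint32_t offset = logical % PAGE_SIZE;   /* offset within the page      */
    uint32_t frame  = page_table[page];      /* look up the frame           */
    return frame * PAGE_SIZE + offset;       /* physical address            */
}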
Segmentation:
Segmentation is a memory management technique where the logical address
space is divided into variable-sized segments, each representing a different
logical part of a program, such as a function, array, or data structure. Each
segment is identified by a segment number and an offset within that segment,
represented as <segment-number, offset>.
The key difference between segmentation and paging is that segmentation
aligns with the user’s logical view of memory, grouping related parts of the
program together (like keeping functions in the same segment). In contrast,
paging divides memory into fixed-size pages without regard to the logical
structure, making it more of an OS-level abstraction.
Segmentation can also be combined with paging (segmented paging), keeping the
logical organization of segments while gaining the allocation advantages of
fixed-size pages.
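A minimal sketch of translating a <segment-number, offset> address through a
segment table (the abort() stand-in for a hardware trap is illustrative):

#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint32_t base;    /* starting physical address of the segment */
    uint32_t limit;   /* length of the segment                    */
} segment_entry;

uint32_t translate_segment(uint32_t seg, uint32_t offset,
                           const segment_entry table[]) {
    if (offset >= table[seg].limit)   /* offset outside the segment?        */
        abort();                      /* hardware would trap to the OS      */
    return table[seg].base + offset;  /* physical address                   */
}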
Virtual Memory:
Virtual Memory is a memory management technique that allows a process to
run even if it is not entirely in the physical memory. It creates the illusion of a
larger main memory by using a portion of the secondary storage (swap space)
as an extension of RAM.
Virtual memory allows programs to exceed the size of the available physical
memory. Instead of loading the entire program into RAM, only the necessary
parts (pages) are brought into memory as needed, through a process called
demand paging.
When a program accesses a page that is not in memory, a page fault occurs,
and the operating system loads the required page from secondary storage into
RAM. This allows the system to execute larger programs and increases CPU
utilization by running more processes concurrently, as each one takes less
physical memory.
This demand paging mechanism optimizes both memory usage and swap time,
allowing efficient execution of multiple processes.
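A minimal sketch of the demand-paging check on each memory access (the valid
bit and the choose_victim/load_page_from_swap helpers are hypothetical names,
kept trivial for illustration):

#include <stdio.h>

#define FRAMES 4

typedef struct {
    unsigned frame;      /* frame number if the page is resident            */
    unsigned valid;      /* 1 = page is in physical memory                  */
} pte_t;

static unsigned next_victim = 0;

/* hypothetical helpers, kept trivial for the sketch */
static unsigned choose_victim(void) { return next_victim++ % FRAMES; }
static void load_page_from_swap(unsigned page, unsigned frame) {
    printf("page fault: loading page %u into frame %u\n", page, frame);
}

unsigned access_page(pte_t page_table[], unsigned page) {
    if (!page_table[page].valid) {           /* page fault                  */
        unsigned frame = choose_victim();    /* pick a frame (replacement)  */
        load_page_from_swap(page, frame);    /* bring the page into RAM     */
        page_table[page].frame = frame;
        page_table[page].valid = 1;
    }
    return page_table[page].frame;           /* access proceeds normally    */
}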
Page Replacement Algorithms
Introduction:
Page replacement algorithms are essential for managing memory when a page
fault occurs, i.e., when a process tries to access a page that is not currently
in physical memory. The OS must evict an existing page to make room for the new
one, which is loaded from secondary storage (swap space).
Characteristics of thrashing:
● High Paging Activity: The system dedicates more time to handling page
faults than to actual processing, significantly degrading performance.
● Consequences: Continuous replacement of needed pages causes delays
and inefficiencies.
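A minimal sketch of FIFO page replacement, one of the simplest policies (the
frame count and fault counter are illustrative):

#include <stdbool.h>

#define FRAMES 3

/* Simulates FIFO replacement over a reference string; returns page faults. */
int fifo_page_faults(const int refs[], int n) {
    int frames[FRAMES];
    int next = 0, faults = 0;
    for (int i = 0; i < FRAMES; i++) frames[i] = -1;   /* empty frames      */

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (!hit) {                          /* page fault                  */
            frames[next] = refs[i];          /* evict the oldest page       */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    return faults;
}

For example, the reference string 7, 0, 1, 2, 0, 3 with 3 frames gives 5 page
faults under this policy (only the second access to page 0 is a hit).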
Buffering:
Caching: