
Intro to OS - Functions of OS

Application software performs specific tasks for the user.

System software operates and controls the computer system and provides a platform to run application software.

The operating system provides the means for proper use of the resources in the operation of the computer system.
OS goals:
Maximum CPU utilization
Less process starvation
Efficient resource management
Functions of OS:
1. Process Management: Manages processes, including creation, scheduling, and termination. It ensures efficient CPU usage and handles multitasking by allowing multiple processes to run concurrently.
2. Memory Management: Allocates and manages the system's memory (RAM). It tracks each byte in memory, manages multiple processes in memory, and handles memory swapping between disk and RAM.
3. File System Management: Manages files and directories, providing a way to store, retrieve, and organize data on storage devices. It ensures data integrity and security.
4. Device Management: Controls and manages hardware devices (e.g., printers, hard drives, I/O devices). It uses device drivers to facilitate communication between the hardware and the OS.
5. Security & Access Control: Protects system resources (files, processes, memory) by managing user permissions and preventing unauthorized access to data.
6. User Interface (UI): Provides an interface for user interaction with the system, such as command-line interfaces (CLI) or graphical user interfaces (GUI).

Types of OS:
Single-process OS: only one process executes at a time from the ready queue. [Oldest type]

Batch-processing OS: jobs with similar needs are grouped into batches and executed one after another, with no user interaction during execution.
Multiprogramming OS: keeps several jobs in memory at once; when one job waits for I/O, the CPU switches to another, maximizing CPU utilization.
Multitasking OS: an extension of multiprogramming that uses time-sharing; the CPU rapidly switches between processes so each gets a responsive share of CPU time.
Multithreading vs multiprocessing (multitasking): multithreading runs multiple threads within a single process (shared address space), while multiprocessing/multitasking runs multiple processes, each with its own address space.

Components of an OS -> kernel (monolithic, micro, hybrid), user space, and shell

System Calls:

A system call is the interface through which a user-space program requests a service from the kernel. The CPU mode bit distinguishes the two privilege levels:
Kernel mode -> 0
User mode -> 1

What happens when you turn on your computer?

Power goes to all components and the CPU starts -> the BIOS (Basic Input Output System), a small program stored on the motherboard, checks all the input/output sources -> the bootloader program on disk is called -> the kernel and the rest of the OS are loaded.
A 32-bit OS has 32-bit registers, so it can address 2^32 unique memory locations, i.e., 4 GB of physical memory.

A 64-bit OS has 64-bit registers, so it can address 2^64 unique memory locations, i.e., 2^34 GB (17,179,869,184 GB, or 16 exabytes) of physical memory.
Process: PCB, states, queues
Scheduling terms:

FCFS (First Come First Serve)

● Working: Executes processes in the order they arrive (see the sketch below).
● Advantage: Simple and easy to implement.
● Disadvantage: High waiting time; short processes stuck behind a long one cause the "convoy effect."
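A minimal sketch of computing FCFS average waiting time, assuming all processes arrive at time 0 (the burst times below are the classic textbook example, not from this document):

#include <stdio.h>

/* FCFS: processes run in arrival order; each process waits for the
   total burst time of everything that arrived before it. */
int main(void) {
    int burst[] = {24, 3, 3};              /* illustrative burst times */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;                /* waiting time of process i */
        wait += burst[i];                  /* next process waits longer */
    }
    printf("Average waiting time: %.2f\n", (double)total_wait / n);
    /* Waits are 0, 24, 27 -> average 17.00: the long first job
       delays the short ones (the convoy effect). */
    return 0;
}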

SJF (Shortest Job First)

● Preemptive: Shortest Remaining Time First (preempts the running process if a new process arrives with a shorter remaining burst time).
● Non-Preemptive: Runs the shortest-burst-time process to completion.
● Advantage: Minimizes average waiting time.
● Disadvantage: Requires knowledge of burst times; prone to starvation of long processes.

Priority Scheduling

● Preemptive: Preempts the running process if a higher-priority process arrives.
● Non-Preemptive: Executes processes based on priority without interruption.
● Advantage: Ensures important tasks are handled first.
● Disadvantage: Can cause starvation of low-priority processes.

RR (Round Robin)

● Working: Time slices (quantum) are given to each process in cyclic order.
● Advantage: Fair; ensures no process starves.
● Disadvantage: High context switching can reduce efficiency.

MLQ (Multi-Level Queue)

● Working: Processes are divided into different queues based on priority, with each queue having its own scheduling algorithm.
● Advantage: Organizes and separates processes by priority.
● Disadvantage: Rigid structure; no dynamic adjustment between queues.

FMLQ (Feedback Multi-Level Queue)

● Working: Processes can move between queues based on their behavior and time spent waiting.
● Advantage: Flexible; dynamically adjusts priorities based on process needs.
● Disadvantage: Complex to implement and manage.
Synchronization - Critical Section, Solutions

Three requirements for a solution to the Critical-Section Problem:

1. Mutual Exclusion: If one process is executing in its critical section, no other process can be executing in its critical section at the same time.
2. Progress: If no process is in its critical section and some processes want to enter, only the processes not in their remainder sections (i.e., those wanting to enter) may take part in deciding which enters next, and this decision cannot be postponed indefinitely.
3. Bounded Waiting: There must be a bound on the number of times other processes can enter their critical sections after a process has requested entry and before that request is granted.

Solutions:

Peterson's algorithm is a "humble" algorithm: even though Pi is waiting to enter (flag[i] = true), it first gives the opportunity to Pj by setting turn = j; Pi enters only when Pj is not interested or the turn comes back to Pi.
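A minimal textbook sketch of Peterson's algorithm for two processes (illustration only: on modern hardware the flag/turn accesses would need atomics or memory fences):

/* Shared state for processes 0 and 1. */
int flag[2] = {0, 0};   /* flag[i] = 1: process i wants to enter    */
int turn;               /* whose turn it is when both want to enter */

void enter_region(int i) {           /* i is 0 or 1 */
    int j = 1 - i;                   /* the other process */
    flag[i] = 1;                     /* declare interest */
    turn = j;                        /* the "humble" step: yield to j */
    while (flag[j] && turn == j)
        ;                            /* busy-wait while j is interested
                                        and it is j's turn */
}

void leave_region(int i) {
    flag[i] = 0;                     /* no longer interested */
}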
Test-and-set locks.
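A minimal spinlock sketch built on C11's atomic test-and-set; atomic_flag_test_and_set atomically sets the flag and returns its previous value:

#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* starts clear (unlocked) */

void acquire(void) {
    while (atomic_flag_test_and_set(&lock_flag))
        ;   /* spin until the old value was clear, i.e., we got the lock */
}

void release(void) {
    atomic_flag_clear(&lock_flag);          /* unlock */
}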
Semaphore:

What is a semaphore: A semaphore is an integer variable that, apart from initialization, is accessed only through two atomic operations: wait() (also called P or down), which decrements it and blocks while it is non-positive, and signal() (V or up), which increments it.
Disadvantages: busy waiting, deadlocks, and starvation.
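A minimal busy-wait sketch of the two operations (the spinning below is exactly the busy-wait disadvantage noted above; a real implementation makes the test-and-decrement atomic and blocks the caller instead of spinning):

typedef struct { volatile int value; } semaphore;

void wait_sem(semaphore *s) {   /* P / down */
    while (s->value <= 0)
        ;                       /* busy-wait: burns CPU cycles */
    s->value--;                 /* NOTE: the test and decrement must be
                                   one atomic step in a real kernel */
}

void signal_sem(semaphore *s) { /* V / up */
    s->value++;
}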

Bounded Buffer

Semaphore Solution for the Bounded Buffer Problem (Producer-Consumer):

In this problem, a fixed-size buffer is shared between a producer and a consumer. The producer adds items, and the consumer removes them. We use semaphores to control access.

●​ Semaphores:
○​ mutex: Ensures mutual exclusion for accessing the buffer.
○​ empty: Tracks the number of empty slots in the buffer.
○​ full: Tracks the number of filled slots.
●​ How it works:
○​ Producer: It waits if the buffer is full, locks the buffer, adds an item,
and signals that a slot is now full.
○​ Consumer: It waits if the buffer is empty, locks the buffer, removes
an item, and signals that a slot is now empty.

This ensures that the producer never adds to a full buffer, the consumer never removes from an empty one, and only one process accesses the buffer at a time.
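A minimal sketch of the producer and consumer steps using POSIX semaphores (the buffer size N and circular-buffer indices are illustrative choices, not from this document):

#include <semaphore.h>

#define N 8                     /* illustrative buffer capacity */

int buffer[N];
int in = 0, out = 0;            /* circular-buffer indices */
sem_t mutex, empty, full;

void init_sems(void) {
    sem_init(&mutex, 0, 1);     /* mutual exclusion on the buffer */
    sem_init(&empty, 0, N);     /* N empty slots initially        */
    sem_init(&full,  0, 0);     /* no filled slots initially      */
}

void producer_step(int item) {
    sem_wait(&empty);           /* wait if the buffer is full   */
    sem_wait(&mutex);           /* lock the buffer              */
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full);            /* signal: one more filled slot */
}

int consumer_step(void) {
    sem_wait(&full);            /* wait if the buffer is empty */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);           /* signal: one more empty slot */
    return item;
}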

Reader-Writer:

Semaphore Solution for Reader-Writer Problem:

This problem involves multiple readers and writers accessing shared data.
Readers can read simultaneously, but writers need exclusive access.

●​ Semaphores:
○​ mutex: Ensures mutual exclusion while updating the reader count.
○​ wrt: Ensures exclusive access for writers.
●​ How it works:
○​ Readers: Multiple readers can read at the same time, but the first
reader locks the writer, and the last reader unlocks it.
○​ Writers: A writer has exclusive access, meaning no other readers or
writers can access the shared resource while it’s writing.

This ensures that writers have exclusive access to modify data, and multiple
readers can access it concurrently when no writer is active.
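A minimal sketch of this readers-preference solution with POSIX semaphores (the shared-data access bodies are placeholders):

#include <semaphore.h>

sem_t mutex;         /* init 1: protects read_count          */
sem_t wrt;           /* init 1: exclusive access for writers */
int read_count = 0;  /* readers currently reading            */

void reader(void) {
    sem_wait(&mutex);
    if (++read_count == 1)      /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    /* ... read the shared data ... */

    sem_wait(&mutex);
    if (--read_count == 0)      /* last reader lets writers back in */
        sem_post(&wrt);
    sem_post(&mutex);
}

void writer(void) {
    sem_wait(&wrt);             /* exclusive access */
    /* ... write the shared data ... */
    sem_post(&wrt);
}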

Dining Philosophers Problem

In this problem, five philosophers sit at a table, alternating between thinking and eating. Each philosopher needs two forks to eat, but there are only five forks available, leading to potential deadlock. We use semaphores to manage access to the forks.

Semaphores:

● mutex: Ensures mutual exclusion for accessing the forks.
● forks: An array of binary semaphores, each representing a fork, initialized to 1 (indicating that the fork is available).

How it Works:
1.​ Philosopher:
○​ When a philosopher wants to eat, they will try to pick up the left and
right forks (semaphores).
○​ They wait (decrement) the semaphore for the left fork and then the
right fork. If either fork is unavailable (i.e., the semaphore is 0), they
will not proceed.
○​ Once both forks are acquired, the philosopher eats.
○​ After eating, the philosopher releases (increments) both forks’
semaphores, making them available for others.

Key Points:
● The fork semaphores ensure that no two philosophers can hold the same fork simultaneously.
● If every philosopher picks up the left fork first, a circular wait (and hence deadlock) is still possible: each holds one fork while waiting for the next. Common fixes break the circular wait, e.g., have odd-numbered philosophers pick up the right fork first, or allow at most four philosophers at the table at once (see the sketch below).
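A minimal sketch using the asymmetric-ordering fix described above (the think/eat bodies are placeholders):

#include <semaphore.h>

#define N 5                        /* five philosophers, five forks */

sem_t fork_sem[N];                 /* each initialized to 1 (available) */

void philosopher(int i) {
    int left = i;
    int right = (i + 1) % N;

    for (;;) {
        /* ... think ... */

        /* Asymmetric ordering breaks the circular wait: even-numbered
           philosophers take the left fork first, odd-numbered the right. */
        if (i % 2 == 0) {
            sem_wait(&fork_sem[left]);
            sem_wait(&fork_sem[right]);
        } else {
            sem_wait(&fork_sem[right]);
            sem_wait(&fork_sem[left]);
        }

        /* ... eat ... */

        sem_post(&fork_sem[left]);
        sem_post(&fork_sem[right]);
    }
}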
DEADLOCK:

What is deadlock: A deadlock is a situation in which a set of processes is blocked because each process holds a resource while waiting for a resource held by another process in the set. Four conditions must hold simultaneously for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait.

Methods for handling Deadlocks:

Prevention: This approach ensures that at least one of the necessary conditions for a deadlock cannot hold. We can prevent mutual exclusion by making resources sharable, eliminate hold and wait by requiring processes to request all resources at once, break the no-preemption condition by allowing resources to be forcibly taken from processes, or avoid circular wait by imposing a strict ordering on resource allocation.

Avoidance:

The idea is that the kernel is given, in advance, information about which resources a process will use in its lifetime. With this, the system can decide for each request whether the process should wait. To decide whether the current request can be satisfied or must be delayed, the system considers the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process. The Banker's algorithm is a well-known method: processes declare their maximum resource needs in advance, and the system grants a request only if doing so keeps the system in a safe state.
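A minimal sketch of the Banker's safety check (the P-process, R-resource matrix layout is the textbook convention; actual values would come from the current system state):

#include <stdbool.h>
#include <string.h>

#define P 5    /* number of processes      */
#define R 3    /* number of resource types */

/* Returns true if some execution order lets every process finish,
   i.e., the state (available, max, allocation) is safe. */
bool is_safe(int available[R], int max[P][R], int allocation[P][R]) {
    int work[R];
    bool finish[P] = {false};
    memcpy(work, available, sizeof(work));

    int done = 0;
    while (done < P) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)          /* need = max - allocation */
                if (max[i][j] - allocation[i][j] > work[j])
                    can_run = false;
            if (can_run) {
                for (int j = 0; j < R; j++)
                    work[j] += allocation[i][j]; /* i finishes and frees
                                                    its resources */
                finish[i] = true;
                done++;
                progress = true;
            }
        }
        if (!progress)
            return false;                        /* no process can finish */
    }
    return true;
}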

Detection: In this method, the system allows deadlocks to occur but detects them afterward. A detection algorithm periodically checks the state of resource allocation and can identify cycles in the resource-allocation graph. Once detected, the system can take action, like terminating processes or preempting resources. A variant of the Banker's safety algorithm can be used for detection.

Recovery: After detecting a deadlock, recovery strategies can be applied. This might involve terminating one or more processes or rolling back processes to a safe state to break the deadlock.

Ignorance: Sometimes, systems may simply ignore deadlocks, especially if they are rare and the cost of prevention or detection outweighs the potential harm caused by a deadlock (the Ostrich algorithm).
Memory Management Techniques

Contiguous memory allocation (partitioning)

Contiguous Memory Allocation refers to allocating memory to a process in a single contiguous block. Two common types are Fixed Partitioning and Dynamic Partitioning.

1. Fixed Partitioning:
○ What it is: Memory is divided into fixed-size partitions, regardless of the program's size.
○ Limitation: Leads to internal fragmentation (wasted space within a partition when a process is smaller than the partition). Also inflexible for varying process sizes.
2. Dynamic Partitioning:
○ What it is: Memory is allocated dynamically based on process size, without fixed partitions.
○ Limitation: Causes external fragmentation (free memory is scattered in small blocks), making it hard to allocate new processes even if enough total memory exists.

Free Space Management

Defragmentation/Compaction:

● Addresses external fragmentation in dynamic partitioning.
● Combines free partitions into a single contiguous block, allowing larger processes to be allocated.
● Compaction reduces system efficiency due to the overhead of moving memory.

Free Space Representation:

●​ Free memory is stored as a free list using a linked-list data structure.

Memory Allocation Algorithms:

● First Fit: Allocates the first available block/hole that fits the request. Simple and fast.
● Next Fit: Similar to First Fit but starts searching from where the last allocation ended.
● Best Fit: Allocates the smallest block that fits. Wastes little space in the chosen block but tends to leave many tiny holes (external fragmentation).
● Worst Fit: Allocates the largest block available. Helps avoid tiny fragments but consumes large holes that bigger requests may later need.
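A minimal sketch of First Fit and Best Fit over free holes stored as an array of sizes (an array stands in here for the linked free list described above):

/* Both return the index of the chosen hole, or -1 if none fits. */
int first_fit(int holes[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)
            return i;                 /* first hole big enough */
    return -1;
}

int best_fit(int holes[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request &&
            (best == -1 || holes[i] < holes[best]))
            best = i;                 /* smallest hole that still fits */
    return best;
}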

Non-contiguous allocation: Paging

What is paging?
Paging is a non-contiguous memory management technique that manages how data is stored in and retrieved from RAM. It divides a program's logical memory into fixed-size blocks called pages and the physical memory into frames of the same size. When a program is executed, its pages are loaded into any available frames in memory, and the mapping is maintained through a page table.

Problem with Dynamic Partitioning:

● External fragmentation is the main issue.
● Compaction can solve it, but with overhead.

Idea Behind Paging:

● Allows non-contiguous allocation by dividing memory into fixed-size blocks called Pages.
● Avoids external fragmentation by breaking logical memory into pages and physical memory into Frames.

Paging:

● Divides physical memory into equal-size frames and logical memory into pages of the same size.
● A Page Table maps pages to frames, containing the base address (frame number) of each page.

How Paging Avoids External Fragmentation:

●​ Non-contiguous allocation of process pages in available frames.

Why Paging is Slow:

●​ Multiple memory accesses for page table lookups.

Translation Lookaside Buffer (TLB):

● A hardware cache that speeds up paging by storing recently used page-table entries (page number -> frame number) for faster access.
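A minimal sketch of the logical-to-physical translation a page table performs (the 4 KB page size and single-level table are illustrative assumptions):

#include <stdint.h>

#define PAGE_SIZE 4096u                          /* illustrative: 4 KB pages */

/* page_table[p] holds the frame number of page p. */
uint32_t translate(uint32_t logical, const uint32_t page_table[]) {
    uint32_t page   = logical / PAGE_SIZE;       /* page number        */
    uint32_t offset = logical % PAGE_SIZE;       /* offset within page */
    uint32_t frame  = page_table[page];          /* the extra memory access
                                                    a TLB hit avoids */
    return frame * PAGE_SIZE + offset;           /* physical address   */
}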

Segmentation:
Segmentation is a memory management technique where the logical address
space is divided into variable-sized segments, each representing a different
logical part of a program, such as a function, array, or data structure. Each
segment is identified by a segment number and an offset within that segment,
represented as <segment-number, offset>.
The key difference between segmentation and paging is that segmentation
aligns with the user’s logical view of memory, grouping related parts of the
program together (like keeping functions in the same segment). In contrast,
paging divides memory into fixed-size pages without regard to the logical
structure, making it more of an OS-level abstraction.

Advantages of segmentation include no internal fragmentation and more efficient execution within segments. However, it can lead to external fragmentation, where free memory is scattered, and variable segment sizes complicate swapping.

Modern systems often use a hybrid approach, combining both segmentation and paging to balance efficiency and flexibility. This allows for better memory organization while still leveraging the advantages of paging.

Virtual Memory:
Virtual Memory is a memory management technique that allows a process to
run even if it is not entirely in the physical memory. It creates the illusion of a
larger main memory by using a portion of the secondary storage (swap space)
as an extension of RAM.

Virtual memory allows programs to exceed the size of the available physical
memory. Instead of loading the entire program into RAM, only the necessary
parts (pages) are brought into memory as needed, through a process called
demand paging.

When a program accesses a page that is not in memory, a page fault occurs,
and the operating system loads the required page from secondary storage into
RAM. This allows the system to execute larger programs and increases CPU
utilization by running more processes concurrently, as each one takes less
physical memory.

The OS uses page replacement algorithms to decide which pages to evict when memory is full. Additionally, a lazy swapper is used, meaning pages are only loaded when required, minimizing unnecessary memory usage.

To track pages, a valid-invalid bit is used in the page table. If a page is in memory, the bit is set to 1 (valid); otherwise, it's set to 0 (invalid), indicating the page is either on disk or outside the process's address space.

This demand paging mechanism optimizes both memory usage and swap time,
allowing efficient execution of multiple processes.
Page Replacement Algorithms
Introduction:​
Page replacement algorithms are essential for managing memory when a page
fault occurs, which happens when a process tries to access a page not currently
in physical memory. The OS must replace an existing page to free space for the
new one from secondary storage (swap space).

1. First-In-First-Out (FIFO)
○ Mechanism: Replaces the oldest page in memory (see the sketch after this list).
○ Advantages: Simple to implement.
○ Disadvantages: Can lead to inefficiencies and Belady's anomaly, where increasing the number of frames can increase page faults.
2. Optimal Page Replacement (farthest future use)
○ Mechanism: Replaces the page that will not be needed for the longest time, i.e., the one whose next reference is farthest away.
○ Advantages: Lowest possible page-fault rate.
○ Disadvantages: Difficult to implement, since it requires future knowledge of the reference string.
3. Least Recently Used (LRU)
○ Mechanism: Replaces the page that hasn't been used for the longest time, based on the assumption that recent usage is a good predictor of future needs.
○ Implementation: Can use counters or a stack to track usage.
○ Advantages: More effective than FIFO and adapts to usage patterns.
4. Counting-Based Page Replacement
○ Mechanism: Maintains a count of references for each page.
○ Types:
■ Least Frequently Used (LFU): Replaces the page with the lowest count.
■ Most Frequently Used (MFU): Replaces the page with the highest count, on the argument that the page with the smallest count was probably just brought in and has yet to be used.
○ Disadvantages: Complexity and overhead make these methods less common.
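A minimal FIFO page-fault simulation over a reference string (the string and frame count are illustrative; swapping the "evict oldest" rule for "evict least recently used" turns this into LRU):

#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};   /* illustrative reference string */
    int n = sizeof(refs) / sizeof(refs[0]);
    int frames[FRAMES];
    int next = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {                           /* page fault */
            faults++;
            if (used < FRAMES)
                frames[used++] = refs[i];     /* fill a free frame     */
            else {
                frames[next] = refs[i];       /* evict the oldest page */
                next = (next + 1) % FRAMES;
            }
        }
    }
    printf("Page faults: %d\n", faults);      /* prints 7 for this string */
    return 0;
}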

Thrashing in Operating Systems


Definition: Thrashing occurs when a process lacks sufficient memory frames for
its active pages, leading to a high frequency of page faults. This results in
excessive time spent servicing faults rather than executing processes.

Characteristics:

●​ High Paging Activity: The system dedicates more time to handling page
faults than to actual processing, significantly degrading performance.
●​ Consequences: Continuous replacement of needed pages causes delays
and inefficiencies.

Techniques to Mitigate Thrashing

1. Working Set Model:
○ Based on the Locality Model, it asserts that if enough frames are allocated to accommodate a process's working set, faults will occur only when the process shifts to a new locality. Insufficient frames lead to thrashing.
2.​ Page Fault Frequency Control:
○​ Monitor the page-fault rate:
■​ Upper Limit: If the rate exceeds this, allocate more frames.
■​ Lower Limit: If it falls below this, reduce the number of
frames.
○​ Objective: By maintaining the page-fault rate within these bounds,
we can prevent thrashing and ensure smoother execution.

Spooling vs Buffering vs Caching

Spooling:

Definition: Spooling (Simultaneous Peripheral Operation On-Line) is the process of temporarily storing data in a queue (spool) for later processing by another program. It manages access to shared resources (like printers) by allowing multiple processes to send data to the spool simultaneously, which is then processed in First In, First Out (FIFO) order.

Buffering:

Definition: Buffering is the technique of temporarily storing data in a reserved area of memory (the buffer) to accommodate speed differences between input and output devices. It allows data to be read or written at varying rates, smoothing out the transfer and preventing delays during communication between devices (e.g., between a fast CPU and a slower disk drive).

Caching:

Definition: Caching is the process of storing frequently accessed data in a high-speed storage component (cache) to improve retrieval times. By keeping copies of data that are likely to be reused, caching reduces the need to access slower underlying storage, significantly enhancing performance for applications such as web browsers, databases, and operating systems.
