
Operating System Jhulan Kumar

Lovely Professional University (LPU)


Subject: - Operating System (OS)
Notes: - Unit 1, Unit 2 & Unit 3
Jhulan Kumar
Assistant Professor
Department of
School of Computer Science and
Engineering, LPU


A Short Notes of Operating System

What is an Operating System?


• A program that acts as an intermediary between a user of a computer and the computer
hardware.
• An operating system is a collection of system programs that together control the
operations of a computer system.
Some examples of operating systems are UNIX, Mach, MS-DOS, MS Windows, Windows/NT,
Chicago, OS/2, MacOS, VMS, MVS, and VM.
Operating system goals:
• Execute user programs and make solving user problems easier.
• Make the computer system convenient to use.
• Use the computer hardware in an efficient manner.
Computer System Components
1. Hardware – provides basic computing resources (CPU, memory, I/O devices).
2. Operating system – controls and coordinates the use of the hardware among the various
application programs for the various users.
3. Applications programs – Define the ways in which the system resources are used to solve the
computing problems of the users (compilers, database systems, video games, business
programs).
4. Users (people, machines, other computers).
Abstract View of System Components

Operating System Definitions


Resource allocator – manages and allocates resources.
Control program – controls the execution of user programs and operations of I/O devices.
Kernel – The one program running at all times (all else being application programs).
Components of OS: The OS has two parts: (1) Kernel and (2) Shell.
(1) The kernel is the active part of an OS, i.e., the part of the OS running at all times. It is a program
which can interact with the hardware. Examples: device drivers, DLL files, system files, etc.
(2) The shell is called the command interpreter. It is a set of programs used to interact with the
application programs. It is responsible for the execution of instructions given to the OS (called
commands).
Operating systems can be explored from two viewpoints: the user and the system.
User View: From the user’s point of view, the OS is designed for ease of use and for maximizing the
work that the user is performing; one user monopolizes its resources.
System View: From the computer's point of view, an operating system is a control program that
manages the execution of user programs to prevent errors and improper use of the computer.
It is especially concerned with the operation and control of I/O devices.

Functions of Operating System:


Process Management
A process is a program in execution. A process needs certain resources, including CPU time,
memory, files, and I/O devices, to accomplish its task.
The operating system is responsible for the following activities in connection with process
management.
✦ Process creation and deletion.

✦ process suspension and resumption.

✦ Provision of mechanisms for:


• process synchronization
• process communication
Main-Memory Management
Memory is a large array of words or bytes, each with its own address. It is a repository of quickly
accessible data shared by the CPU and I/O devices.
Main memory is a volatile storage device. It loses its contents in the case of system failure.
The operating system is responsible for the following activities in connection with memory
management:
• Keep track of which parts of memory are currently being used and by whom.
• Decide which processes to load when memory space becomes available.
• Allocate and de-allocate memory space as needed.
File Management
A file is a collection of related information defined by its creator. Commonly, files represent
programs (both source and object forms) and data.
The operating system is responsible for the following activities in connection with file
management:
✦ File creation and deletion.

✦ Directory creation and deletion.

✦ Support of primitives for manipulating files and directories.

✦ Mapping files onto secondary storage.

✦ File backup on stable (nonvolatile) storage media.


I/O System Management
The I/O system consists of:
✦A buffer-caching system

✦ A general device-driver interface

✦ Drivers for specific hardware devices


Secondary-Storage Management
Since main memory (primary storage) is volatile and too small to accommodate all data and
programs permanently, the computer system must provide secondary storage to back up main
memory.
Most modern computer systems use disks as the principal on-line storage medium, for both
programs and data. The operating system is responsible for the following activities in
connection with disk management:
✦ Free space management

✦ Storage allocation

✦ Disk scheduling
Networking (Distributed Systems)
• A distributed system is a collection of processors that do not share memory or a clock. Each
processor has its own local memory.
• The processors in the system are connected through a communication network.
• Communication takes place using a protocol.
• A distributed system provides user access to various system resources.
• Access to a shared resource allows:
✦ Computation speed-up

✦ Increased data availability

✦ Enhanced reliability
Protection System
• Protection refers to a mechanism for controlling access by programs, processes, or users to
both system and user resources.
The protection mechanism must:
✦ distinguish between authorized and unauthorized usage.

✦ specify the controls to be imposed.

✦ provide a means of enforcement.


Command-Interpreter System
• Many commands are given to the operating system by control statements which deal with:
✦ process creation and management

✦ I/O handling

✦ secondary-storage management

✦ main-memory management

✦ file-system access

✦ protection

✦ networking
• The program that reads and interprets control statements is called variously:
✦ command-line interpreter

✦ shell (in UNIX)


• Its function is to get and execute the next command statement.

5
Operating System Jhulan Kumar

Operating-System Structures
• System Components
• Operating System Services
• System Calls
• System Programs
• System Structure
• Virtual Machines
• System Design and Implementation
• System Generation
Common System Components
• Process Management
• Main Memory Management
• File Management
• I/O System Management
• Secondary-Storage Management
• Networking
• Protection System
• Command-Interpreter System

Evolution of OS:
1. Mainframe Systems
• Reduce setup time by batching similar jobs.
• Automatic job sequencing – automatically transfers control from one job to another. This was the
first rudimentary operating system.
• Resident monitor:
• initial control is in the monitor
• control transfers to the job
• when the job completes, control transfers back to the monitor
2. Batch Processing Operating System:
• This type of OS accepts more than one job, and these jobs are batched/grouped together
according to their similar requirements. This is done by the computer operator. Whenever the
computer becomes available, the batched jobs are sent for execution and the output is
gradually sent back to the user.
• It allowed only one program at a time.
• This OS is responsible for scheduling the jobs according to priority and the resources
required.
3. Multiprogramming Operating System:
• This type of OS executes more than one job at a time on a single processor. It increases
CPU utilization by organizing jobs so that the CPU always has one job to execute.
• The concept of multiprogramming is described as follows:
• All the jobs that enter the system are stored in the job pool (on disk). The operating system
loads a set of jobs from the job pool into main memory and begins to execute them.
• During execution, a job may have to wait for some task, such as an I/O operation, to
complete. In a multiprogramming system, the operating system simply switches to another
job and executes it. When that job needs to wait, the CPU is switched to another job, and so
on.
• When the first job finishes waiting, it gets the CPU back.
• As long as at least one job needs to execute, the CPU is never idle.
• Multiprogramming operating systems use the mechanism of job scheduling and CPU
scheduling.
4. Time-Sharing/Multitasking Operating Systems
Time sharing (or multitasking) OS is a logical extension of multiprogramming. It
provides extra facilities such as:
• Faster switching between multiple jobs to make processing faster.
• Allows multiple users to share the computer system simultaneously.
• The users can interact with each job while it is running.
These systems use the concept of virtual memory for effective utilization of memory space. Hence,
in this OS, no jobs are discarded; each one is executed using the virtual memory concept. It uses
CPU scheduling, memory management, disk management and security management.
Examples: CTSS, MULTICS, CAL, UNIX etc.
5. Multiprocessor Operating Systems
Multiprocessor operating systems are also known as parallel OS or tightly coupled OS. Such
operating systems have more than one processor in close communication, sharing the
computer bus, the clock, and sometimes memory and peripheral devices. They execute multiple
jobs at the same time and make processing faster.
Multiprocessor systems have three main advantages:
• Increased throughput: By increasing the number of processors, the system performs more
work in less time. The speed-up ratio with N processors is less than N.
• Economy of scale: Multiprocessor systems can cost less than an equivalent set of single-
processor systems, because they can share peripherals, mass storage, and power supplies.
• Increased reliability: If one processor fails, each of the remaining processors picks up a
share of the work of the failed processor. The failure of one processor will not halt the
system, only slow it down.

The ability to continue providing service proportional to the level of surviving hardware
is called graceful degradation. Systems designed for graceful degradation are called
fault tolerant.

7
Operating System Jhulan Kumar

The multiprocessor operating systems are classified into two categories:


1. Symmetric multiprocessing system
2. Asymmetric multiprocessing system
• In a symmetric multiprocessing system, each processor runs an identical copy of the
operating system, and these copies communicate with one another as needed.
• In an asymmetric multiprocessing system, one processor, called the master processor, controls
the other processors, called slave processors; this establishes a master-slave relationship. The
master processor schedules the jobs and manages the memory for the entire system.
6. Distributed Operating Systems
• In a distributed system, the different machines are connected in a network, and each machine
has its own processor and its own local memory.
• In this system, the operating systems on all the machines work together to manage the
collective network resources.
• It can be classified into two categories:
1. Client-Server systems
2. Peer-to-Peer systems
Advantages of distributed systems:
• Resource sharing
• Computation speed-up – load sharing
• Reliability
• Communications
Distributed systems require a networking infrastructure, such as local area networks (LAN) or
wide area networks (WAN).
7. Desktop Systems/Personal Computer Systems
• The PC operating system is designed for maximizing user convenience and responsiveness.
This system is neither multi-user nor multitasking.
• These systems include PCs running Microsoft Windows and the Apple Macintosh. The MS-
DOS operating system from Microsoft has been superseded by multiple flavors of Microsoft
Windows and IBM has upgraded MS-DOS to the OS/2 multitasking system.
• The Apple Macintosh operating system has been ported to more advanced hardware, and
now includes new features such as virtual memory and multitasking.
8. Real-Time Operating Systems (RTOS)
• A real-time operating system (RTOS) is a multitasking operating system intended for
applications with fixed deadlines (real-time computing). Such applications include some
small embedded systems, automobile engine controllers, industrial robots, spacecraft,
industrial control, and some large-scale computing systems.

• The real time operating system can be classified into two categories:
1. hard real time system and 2. soft real time system.
• A hard real-time system guarantees that critical tasks be completed on time. This goal
requires that all delays in the system be bounded, from the retrieval of stored data to the
time that it takes the operating system to finish any request made of it. Such time
constraints dictate the facilities that are available in hard real-time systems.
• A soft real-time system is a less restrictive type of real-time system. Here, a critical real-
time task gets priority over other tasks and retains that priority until it completes. Soft real
time system can be mixed with other types of systems. Due to less restriction, they are
risky to use for industrial control and robotics.

Operating System Services

Following are the five services provided by operating systems for the convenience of the
users.
1. Program Execution
The purpose of computer systems is to allow the user to execute programs. So the operating
system provides an environment where the user can conveniently run programs. Running a
program involves allocating and de-allocating memory and, in the case of multiprocessing,
CPU scheduling.
2. I/O Operations
Each program requires input and produces output. This involves the use of I/O devices. The
operating system provides I/O services, which makes it convenient for users to run programs.
3. File System Manipulation
The output of a program may need to be written into new files or input taken from some files. The
operating system provides this service.
4. Communications
The processes need to communicate with each other to exchange information during execution. It
may be between processes running on the same computer or on different computers.
Communication can occur in two ways: (i) shared memory or (ii) message passing.
5. Error Detection
An error in one part of the system may cause the complete system to malfunction. To avoid
such a situation, the operating system constantly monitors the system to detect errors. This
relieves the user of the worry of errors propagating to various parts of the system and causing
malfunctions.
Following are the three services provided by operating systems for ensuring the efficient operation
of the system itself.
1. Resource allocation
When multiple users are logged on the system or multiple jobs are running at the same time,
resources must be allocated to each of them. Many different types of resources are managed
by the operating system.
2. Accounting
The operating systems keep track of which users use how many and which kinds of computer
resources. This record keeping may be used for accounting (so that users can be billed) or
simply for accumulating usage statistics.
3. Protection
When several disjoint processes execute concurrently, it should not be possible for one process
to interfere with the others, or with the operating system itself. Protection involves ensuring
that all access to system resources is controlled. Security of the system from outsiders is also
important. Such security starts with each user having to authenticate themselves to the system,
usually by means of a password, to be allowed access to the resources.

System Call:

• System calls provide an interface between the process and the operating system.
• System calls allow user-level processes to request services from the operating
system that the process itself is not allowed to perform.
• For example, for I/O a process invokes a system call telling the operating system to read
or write a particular area, and this request is satisfied by the operating system.

The following are the different types of system calls provided by an operating system:

Process control
• end, abort
• load, execute
• create process, terminate process
• get process attributes, set process attributes
• wait for time
• wait event, signal event
• allocate and free memory
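The sketch below is an illustrative addition (not part of the original notes) showing how a user program uses the process-control calls listed above on a POSIX system: fork creates a child process, execlp loads a new program image (the "ls" utility is chosen purely as an example), and waitpid waits for the child to terminate.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                  /* create process system call        */
    if (pid < 0) {                       /* fork failed                       */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {               /* child: load a new program image   */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                /* reached only if exec fails        */
        exit(EXIT_FAILURE);
    } else {                             /* parent: wait for child to finish  */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d terminated\n", (int)pid);
    }
    return 0;
}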
• Program- It is a set of instructions; by executing them step by step we can get a service
done. It is a passive entity and resides on the disk.

• Process- Instance of program or a program under execution. It is an active entity which resides
in the main memory.
Operations on Process: -

o Creating process

o Running, scheduling, Suspending, Resuming, Blocking

o Terminating a process

• Modes of operating system

Operating system operates in two modes, which are: -

o User Mode

- It is also known as non-privileged mode.

- Preemption is always possible

- All user programs execute in this mode.

o Kernel Mode

- It is also known as privileged mode, supervisory mode, system mode.

- No preemption is allowed.

- Operating system programs run in this mode.

• Process Control Block- Process Control Block is defined as a data structure that contains all the
information related to a process. The process control block is also called task control block or entry
of the process table, etc.


• Attributes of PCB: -
o Process ID
o Program Counter status
o Process state
o Memory Management data
o CPU scheduling data
o CPU registers
o Address space for process
o Accounting data
o I/O information
o User’s Identification
o List of open files
o List of open devices
o Priority
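A rough C sketch of what a PCB holding the attributes above might look like; the field names and types are illustrative only and are not taken from any particular kernel.

#include <stdint.h>

#define MAX_OPEN_FILES 16

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* Illustrative process control block; real kernels store far more. */
typedef struct pcb {
    int           pid;                        /* process ID                 */
    proc_state_t  state;                      /* current process state      */
    uint64_t      program_counter;            /* saved program counter      */
    uint64_t      registers[16];              /* saved CPU registers        */
    int           priority;                   /* scheduling priority        */
    void         *page_table;                 /* memory-management data     */
    uint64_t      cpu_time_used;              /* accounting data            */
    int           open_files[MAX_OPEN_FILES]; /* list of open files         */
    int           uid;                        /* user's identification      */
    struct pcb   *next;                       /* link in a scheduling queue */
} pcb_t;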

• Process State Diagram:

• Threads- A thread may be defined as a flow of execution through some process code.
o A thread is also known as a lightweight process.
o Threads improve application’s performance by providing parallelism.
o Threads improve the performance of operating systems by reducing the process overhead.
o A thread is equivalent to a classical process; multithreading is a software approach to
improving performance.
o Every thread belongs to exactly one process, and no thread can exist outside a process.
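A minimal POSIX-threads sketch (an illustrative addition, not from the original notes) showing that the threads of one process share the same address space; compile with -pthread.

#include <pthread.h>
#include <stdio.h>

static int shared = 0;                              /* shared by all threads of the process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    pthread_mutex_lock(&lock);                      /* protect the shared counter */
    shared += *(int *)arg;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);          /* both threads run in one address space */
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);                /* prints 3 */
    return 0;
}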


• Types of Threads-
o There are two types of threads, which are:
- User Level Threads

- Kernel Level Threads

• Difference between ULT and KLT

USER LEVEL THREAD vs KERNEL LEVEL THREAD
- User-level threads are implemented by users; kernel-level threads are implemented by the OS.
- The OS does not recognize user-level threads; kernel-level threads are recognized by the OS.
- Implementation of user-level threads is easy; implementation of kernel-level threads is complicated.
- Context switch time is less for user-level threads and more for kernel-level threads.
- A user-level context switch requires no hardware support; kernel-level context switches need hardware support.
- If one user-level thread performs a blocking operation, the entire process is blocked; if one kernel-level thread performs a blocking operation, another thread can continue execution.
- User-level threads are designed as dependent threads; kernel-level threads are designed as independent threads.

• Why do we need Threads over Process?


o Creating a process and switching between processes is costly.
o Processes have separate address space
o Sharing memory areas among processes is non-trivial.

• Difference between Process and threads:

Process vs Thread
- A process is resource-intensive (heavyweight); a thread is a lightweight process that requires fewer resources than a process.
- Process switching needs interaction with the OS; thread switching does not require any interaction with the operating system.
- In multiprocessing environments, each process has its own memory and file resources but executes the same code; all threads of a process can share the same open files and child processes.
- If one process gets blocked, another process cannot run until the first process is unblocked; whereas if one thread gets blocked and waits, a second thread of the same task can execute.
- Multiple processes without threads need more resources; multi-threaded processes use fewer resources.
- With multiple processes, each process operates independently of the others; a thread can read, write or alter another thread's data.

• Multithreading- It is the ability of the CPU to execute multiple threads concurrently.


o Advantages of Multithreading:
- Resource sharing

- Responsiveness

- Less context-switching overhead, i.e., a more economical design.

- Efficiently utilize multiprocessor architecture.

o Multithreading Models:
- Many-to-one model

- One-to-one model

- Many-to-many model

• Schedulers in OS- Schedulers in OS are special system software. They help in scheduling the
processes in various ways.
Types of Schedulers:

o Long-term Scheduler
- The long-term scheduler's job is to maintain a good degree of multiprogramming.

- The long-term scheduler is also known as the Job Scheduler.

o Short-term Scheduler
- The short-term scheduler's job is to increase system performance.

- The short-term scheduler is also known as the CPU Scheduler.

o Medium-term Scheduler
- The medium-term scheduler's job is to perform swapping.

- The medium-term scheduler reduces the degree of multiprogramming.

• Dispatcher- The dispatcher is the module that gives control of the CPU to the process selected
by the scheduler. This function involves:


o Switching context.
o Switching to user mode.
o Jumping to the proper location in the newly loaded program.
The dispatcher needs to be as fast as possible, as it is run on every context switch. The time
consumed by the dispatcher is known as dispatch latency.
• Process Queues- The operating system maintains a queue for each of the process states, and the
PCB of a process is stored in the queue of its current state. If the process moves from one state
to another, its PCB is unlinked from the corresponding queue and added to the queue of the
new state.
o Queues Maintained by Operating System:
- Job queue

- Ready queue

- Waiting queue

• Scheduling Criteria-
o CPU Utilization: CPU should be as busy as possible.
o Throughput: Number of processes completed per unit time.
Throughput = N / L,
where N = number of processes,

and L = Maximum Completion Time – Minimum Arrival Time


o Arrival Time: Time at which the process arrives in the ready queue.
o Completion Time: Time at which process completes its execution.
o Burst Time: Time required by a process for CPU execution.
o Turn Around Time: Completion Time – Arrival Time
o Waiting Time (W.T): Turnaround Time – Burst Time
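For example (hypothetical figures): a process that arrives at time 2, needs a CPU burst of 5, and completes at time 12 has turnaround time 12 – 2 = 10 and waiting time 10 – 5 = 5.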

• Starvation- Indefinite waiting of processes to get CPU cycles for execution.

• CPU Scheduling Algorithm


o FCFS (First Come First Serve)-
- It schedules the processes according to the order of their arrival time.

- Non-preemptive in nature.

- No starvation.

- Simple

- Easy to understand and implement

- If shorter jobs arrive after long jobs, the shorter jobs have to wait for a long
time; this phenomenon is known as the Convoy Effect.
o Shortest Job First(SJF) or Shortest Job Next (SJN)-
- It executes jobs with smaller CPU burst first.

- Non-preemptive in nature.
- Gives the least average waiting time among all scheduling algorithms.

- Jobs with large burst time may starve the CPU.


- Poor response time

- Unimplementable, because it is impossible to predict burst time of processes in advance.

- Used as a benchmark to measure the performance of all other scheduling algorithms.

- High Throughput.

o Shortest Remaining Time First (SRTF)-


- It first executes jobs who have the smallest remaining burst time.
- Pre-emptive in nature.

- Jobs with large burst time may starve the CPU.

- Poor response time, but better than SJF.

- High throughput.

- Unimplementable, because it is impossible to predict burst time of processes in advance.

• Round Robin
o Round robin scheduling is similar to FCFS scheduling, except that CPU bursts are assigned with
limits called time quantum.
o Always Pre-emptive in nature.
o Best response time.
o Ready queue is treated as a circular queue.
o Mostly used for a time-sharing system.
o With large time quantum, it acts as FIFO
o No priority.
o No starvation.

• Priority Scheduling
o Every process assigned with a number, and these numbers will be the priority of that process.
o Processes with lower priority may starve for CPU.
o No idea of response time
o Mostly used in Real-time operating systems.
o Can be preemptive and non-preemptive both in nature.

• Longest Job First


o First executes the process with the longest CPU burst.
o Completely opposite of SJF.
o Non-preemptive in nature.
o Starvation exists.

• Longest Remaining Time First


o First executes the process with the longest remaining CPU burst.
o Completely opposite of SRTF.
o Preemptive in nature.
o Starvation exists.

• Highest Response Ratio Next



o A Response Ratio is calculated for each of the available jobs and the Job with the highest
response ratio is given priority over the others.
o Response Ratio = (W + S) / S, where W → Waiting Time, S → Service Time or Burst Time.
o Most optimal scheduling algorithm.
o Non-preemptive in nature.
o No starvation.
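For example (hypothetical figures): a job with waiting time W = 9 and burst S = 3 has response ratio (9 + 3)/3 = 4, while a job with W = 4 and S = 4 has (4 + 4)/4 = 2, so the first job is scheduled next. The ratio grows as a job waits, which is why HRRN avoids starvation.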

• Multilevel Queue Scheduling


o With a single ready queue, only one scheduling technique can be used. To use more than one
scheduling technique, the ready queue is partitioned into several queues – this is multilevel
queue scheduling.
o Each queue can have different priorities.
o Starvation exists.

• Multilevel Feedback Queues


o Similar to Multilevel queue scheduling, except jobs may be moved from one queue to another.
o It can adjust each job’s priority.
o Improved version of Multilevel queue scheduling.
o No starvation.
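Before the practice questions, a small illustrative C sketch (with a hypothetical process set) that simulates FCFS scheduling and computes completion, turnaround and waiting times using the formulas from the scheduling-criteria section; it can be adapted to check answers to the exercises.

#include <stdio.h>

/* Hypothetical process set: arrival time and CPU burst time. */
struct proc { const char *name; int arrival; int burst; };

int main(void) {
    struct proc p[] = { {"P1", 0, 5}, {"P2", 1, 3}, {"P3", 2, 8} };
    int n = sizeof p / sizeof p[0];
    int clock = 0, total_tat = 0, total_wt = 0;

    /* FCFS: processes are assumed to be listed in order of arrival. */
    for (int i = 0; i < n; i++) {
        if (clock < p[i].arrival)            /* CPU idles until the job arrives    */
            clock = p[i].arrival;
        clock += p[i].burst;                 /* completion time of this job        */
        int tat = clock - p[i].arrival;      /* turnaround = completion - arrival  */
        int wt  = tat - p[i].burst;          /* waiting    = turnaround - burst    */
        printf("%s: completion=%d turnaround=%d waiting=%d\n",
               p[i].name, clock, tat, wt);
        total_tat += tat;
        total_wt  += wt;
    }
    printf("average turnaround=%.2f average waiting=%.2f\n",
           (double)total_tat / n, (double)total_wt / n);
    return 0;
}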

Practice questions

Note: - The Gantt chart of each question is given. Students need to prepare it and calculate
the response time, number of context switches, arrival time, turnaround time, etc.,
and verify it against the given solution for the exercise.


• Deadlock- A deadlock is a situation where each of the computer processes waits for a resource
that is assigned to some other process. In this situation, none of the processes gets executed,
since the resource each needs is held by some other process that is itself waiting for another
resource to be released.
At least two processes are required for a deadlock to occur.

• Necessary Conditions of Deadlock:


o Mutual exclusion- Mutual exclusion simply means that two processes cannot access a single
resource at the same time.
o Hold and Wait- Hold and Wait says that a process is waiting for some resources while keeping
another resource with it at the same time.
o No-preemption- A process holding a resource will release it voluntarily after the completion
of the process, and no other process can take that resource from the process holding it.
o Circular Wait- A process A is holding resource R3 and waiting for resource R1 and another
process B holding resource R1 but waiting for some other resource R2 and resource R2 is
occupied by some process C which is waiting for resource R3. In this situation all the processes
are waiting for each other in a cyclic manner, no process wants to release the resource and
continues to wait for others. In this way all the processes wait for each other for indefinite time
and can lead to deadlock.
• Strategies For Handling Deadlock
o Deadlock Prevention: Don’t let a deadlock occur in the system by violating any one of the
necessary conditions. It involves:
- Infinite resources


- No sharing

- No waiting

- Preempt resources

- Allocate all resources at the beginning.

- Every process uses the same order for accessing resources.

o To violate any of the four conditions


- Mutual exclusion: Cannot be violated as this condition is hardware dependent

- Hold & Wait: Conservative approach; Do not hold; Wait timeout.

- Non-preemption: Violate it by forcefully preempting resources.

- Circular wait: Give a unique number to each resource and require processes to request
resources in increasing (or decreasing) order of these numbers. If a process needs a
lower-numbered resource, it must release all higher-numbered resources it holds and then
request all the required resources again.
• Deadlock Avoidance: Avoid occurrence of deadlock in the system by various methods. Checks
whether the system is in a safe state or not.

o A system is said to be safe if and only if the needs of all the processes could be satisfied with
the available resources in some order. The order is known as a safe sequence.
o There may exist multiple safe sequences.
o For a single instance of each resource, use the resource allocation graph.
o Banker’s Algorithm or Safety Algorithm is used to allocate multiple instances of resources to
the processes.

o Need = Max – Allocation

o r ≥ p(n – 1) + 1, where r = number of available resources (deadlock free), n = maximum
number of resources needed by each process, p = number of processes.
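For example (hypothetical figures): with p = 3 processes, each needing at most n = 2 units of a resource, r ≥ 3(2 – 1) + 1 = 4 units guarantee the system is deadlock-free, because at least one process can always obtain both of its units and eventually release them.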
• Deadlock detection and Recovery: Do not check for safety; whenever a process requests
resources, allocate them immediately. If a deadlock occurs, identify it and recover from it by
suitable methods.
o Deadlock detection involves:
- Scan the RAG (Resource allocation Graph)

- Detect a cycle in a graph.

- Recovery from deadlock.

- The detection algorithm is typically activated when CPU utilization has decreased or dropped
to zero, because most of the processes are blocked.


o Deadlock recovery involves:


- Abort all deadlocked processes.

- Abort one process at a time and check, repeat this process until deadlock is removed.

- Rollback to safe state and restart from that state.

- Preempt some resources from processes and give these resources to other processes
until the deadlock cycle is broken.
• Deadlock Ignorance: This involves completely ignoring the possibility of deadlock and assuming
that no deadlock ever occurs. It uses the Ostrich approach.
• Inter-process Communication-
o Independent processes- Two or more processes will execute autonomously without affecting
the output or order of execution of other processes.
- No states will be shared with other processes

- Order of scheduling doesn’t matter.

- Output of a process is independent of order of execution of other processes.

o Co-operating processes- Two or more processes are said to be co-operative if and only if they
get affected by or affect the execution of other processes.
- These processes require an IPC mechanism that will allow them to exchange information
and data like messages, shared memory, shared variables, etc.
- Shared resources, speed-up and modularity are the advantages of co-operating
processes.
• IPC Models- There are primarily two IPC models:
o Shared memory
o Message Passing
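A minimal sketch of the message-passing model using a POSIX pipe between a parent and a child process; the message text and buffer size are illustrative assumptions, not part of the original notes.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                                   /* fd[0]: read end, fd[1]: write end */
    if (fork() == 0) {                          /* child: the receiver */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        _exit(0);
    }
    close(fd[0]);                               /* parent: the sender */
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}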

• Concurrency
o Able to run multiple applications at the same time.
o Allows better resource utilization
o Better performance
o Better average response time

• Synchronization- The procedure which is involved in preserving the order of execution of


cooperative processes is called Process Synchronization.
Need of Synchronization-

o When two or more processes execute concurrently while sharing some system resources, we
need proper synchronization between their actions so that all these processes execute properly.
o Process synchronization is needed to avoid inconsistent output.

• Critical Section- The critical section is a small piece of code, or a section of the program, from
which a process accesses shared resources during its execution.
• Race Condition- A race condition occurs when the final output depends entirely on the order in
which the instructions of the processes are executed.
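A small illustrative sketch of a race condition (an assumed example, not from the notes): two threads increment a shared counter without synchronization, so the final value varies from run to run; compile with -pthread.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;              /* shared resource */

static void *inc(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                    /* unprotected read-modify-write: the critical section */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, inc, NULL);
    pthread_create(&t2, NULL, inc, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but the updates interleave, so the result varies run to run. */
    printf("counter = %ld\n", counter);
    return 0;
}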


• Criteria of Synchronization Mechanism-


o Mutual Exclusion- It says that two processes cannot be present in the critical section
simultaneously.
o Progress- Process running outside the critical section can never block other interested
processes from entering the critical section when it is free.
o Bounded waiting- The time a process waits to enter its critical section must be bounded (finite).

• Lack of Process Synchronization can generate-


o Inconsistency in the system.
o Loss of data
o Deadlock

• Semaphore- A semaphore is an integer variable that provides synchronization between multiple

processes executing concurrently in a system.
• Operations on Semaphore
o Wait operation
- Also known as down, or P operation.

- It decreases semaphore’s value by 1.

- It is used in the entry part of the critical section.

- If a resource is not free, then block the process.

o Signal operation
- Also known as UP, or V operation.

- It increases semaphore’s value by 1.

- It is used in the exit part of the critical section to release the acquired lock.
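A minimal sketch using POSIX semaphores, where sem_wait plays the role of the wait/P/down operation and sem_post the signal/V/up operation; the threads and shared counter are illustrative assumptions only (compile with -pthread).

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;                    /* binary semaphore guarding the critical section */
static int shared = 0;

static void *worker(void *arg) {
    (void)arg;
    sem_wait(&mutex);                  /* P / down: decrement, block if the value is 0   */
    shared++;                          /* critical section                               */
    sem_post(&mutex);                  /* V / up: increment, wake a waiting process      */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);            /* initial value 1 -> behaves as a binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* always 2: the accesses are mutually exclusive */
    sem_destroy(&mutex);
    return 0;
}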

• Types of semaphore-
o Binary semaphore- Uses only the values 0 and 1.
- It may involve busy waiting, known as the spin-lock issue.

- It can work on a single instance of a resource.

- Its wait operation can succeed for only one process at a time.

o Counting Semaphore- It can use any integer.


- Its negative value indicates the number of requests in a waiting queue.

- Its positive value indicates the number of instances of resource R currently available for
allocation, and there is no request in the waiting queue.
- While using multiple semaphores, following proper ordering is highly important.

o Strong semaphore- A semaphore whose definition includes a FIFO queue policy for waiting processes.
o Weak semaphore- A semaphore that doesn’t specify the order in which processes are removed
from the queue.

• Classical Synchronization Problems


o Sleeping barber problem
o Bounded buffer problem


o Producer Consumer problem


o Dining Philosophers Problem
o Reader’s writer’s Problem

• Synchronization Mechanism
1. Lock variable
- Software mechanism implemented in user mode.

- Solution to busy waiting.


- Completely failed mechanism because it cannot satisfy basic criteria of synchronization,
i.e., mutual exclusion.
2. Test and Set Lock
- Uses test & Set instruction to maintain synchronization between processes executing
concurrently.
- Ensures mutual exclusion.

- Deadlock free.

- Does not guarantee bounded waiting.

- Some processes may suffer from starvation.

- Suffers from spin lock.

- It is not architecturally neutral, because it requires hardware support for the test-and-set
instruction.
- It is a solution to busy waiting.
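A rough sketch of a test-and-set spin lock (an illustrative addition, not a production lock), using a C11 atomic exchange to stand in for the hardware test-and-set instruction.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool lock = false;          /* false = free, true = held */

/* Test-and-set: atomically read the old value and set the lock to true. */
static void acquire(atomic_bool *l) {
    while (atomic_exchange(l, true))
        ;                                 /* spins (busy-waits) while the lock is held */
}

static void release(atomic_bool *l) {
    atomic_store(l, false);               /* leave the critical section */
}

/* Usage sketch:
 *   acquire(&lock);
 *   ... critical section ...
 *   release(&lock);
 */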

3. Turn variable
- It uses a variable called turn to provide the synchronization among processes.
- Ensures mutual exclusion.

- Follows a strict alternation approach, so progress is not guaranteed.

- Ensures bounded waiting because processes execute one by one and there is a
guarantee that each process will get a chance to execute.
- Starvation doesn’t exist.

- It is architecturally neutral because it doesn’t need any special hardware or operating-system support.

- Deadlock free.

- It is a solution to busy waiting.



4. Peterson Solution
- It is a synchronization mechanism implemented in user mode, it uses two variables:
turn and interest.
- Satisfy mutual exclusion.

- Satisfy progress.

- Bounded waiting is also satisfied.
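A classical sketch of Peterson's solution for two processes, using the turn and interest variables mentioned above (an illustrative addition; on modern out-of-order processors it additionally needs memory barriers or atomics).

#include <stdbool.h>

/* Peterson's solution for two processes with ids 0 and 1. */
static volatile bool interested[2] = { false, false };
static volatile int  turn = 0;

void enter_critical_section(int id) {
    int other = 1 - id;
    interested[id] = true;     /* declare interest                    */
    turn = other;              /* give priority to the other process  */
    while (interested[other] && turn == other)
        ;                      /* busy-wait until it is safe to enter */
}

void exit_critical_section(int id) {
    interested[id] = false;    /* withdraw interest */
}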

****

