Lec 14 - Operating - System

Process synchronization is a mechanism in operating systems that manages the execution of multiple processes accessing shared resources to ensure data consistency and prevent issues like race conditions and deadlocks. It categorizes processes as independent or cooperative and employs techniques such as semaphores to control access to critical sections. Key requirements for effective synchronization include mutual exclusion, progress, and bounded waiting to maintain system efficiency and fairness.

Uploaded by preeti

Process Synchronization

Introduction
• Process Synchronization is a mechanism in operating systems
used to manage the execution of multiple processes that access
shared resources.
• Its main purpose is to ensure data consistency, prevent race
conditions (data corruption) and avoid deadlocks (system
freeze) in a multi-process environment.

On the basis of synchronization, processes are categorized as one of the following two types:
• Independent Process: The execution of one process does not
affect the execution of other processes.
• Cooperative Process: A process that can affect or be affected
by other processes executing in the system.
• Process Synchronization is the coordination of multiple
cooperating processes in a system to ensure controlled access
to shared resources, thereby preventing race conditions and
other synchronization problems.

Improper synchronization in an Inter-Process Communication environment leads to the following problems:
• Inconsistency: concurrent updates leave shared data in a conflicting state.
• Loss of Data: one process's update is overwritten by another's.
• Deadlock: processes block each other indefinitely while waiting for resources.
Role of Synchronization in IPC
• Preventing Race Conditions: Ensures processes don’t access
shared data at the same time, avoiding inconsistent results.
• Mutual Exclusion: Allows only one process in the critical
section at a time.
• Process Coordination: Lets processes wait for specific
conditions (e.g., producer-consumer).
• Deadlock Prevention: Avoids circular waits and indefinite
blocking by using proper resource handling.
• Safe Communication: Ensures data/messages between
processes are sent, received and processed in order.
• Fairness: Prevents starvation by giving all processes fair
access to resources.
Types of Synchronization
• Competitive: Two or more processes are said to be in Competitive Synchronization if and only if they compete for access to a shared resource.
Lack of proper synchronization among competing processes may lead to either inconsistency or data loss.
• Cooperative: Two or more processes are said to be in Cooperative Synchronization if and only if they affect each other, i.e., the execution of one process affects the other.
Lack of proper synchronization among cooperating processes may lead to deadlock.
Linux Code
>> ps -e | grep "chrome" | wc -l

• The ps command produces a list of processes running in Linux (-e lists all processes).
• The grep command filters the lines from the ps output that contain "chrome".
• The wc -l command counts how many lines (i.e., matching processes) are in the output.
Conditions That Require Process
Synchronization
1. Critical Section: A critical section is a code segment that can
be accessed by only one process at a time. The critical section
contains shared variables that need to be synchronized to
maintain the consistency of data variables. So the critical section
problem means designing a way for cooperative processes to
access shared resources without creating data inconsistencies.

2. Race Condition: A race condition is a situation that may occur inside a critical section. It happens when the result of multiple processes/threads executing in the critical section differs according to the order in which they execute.
Conditions That Require Process
Synchronization
3. Pre-emption: Preemption is when the operating system stops a running process to give the CPU to another process. This allows the system to make sure that important tasks get enough CPU time. It matters here because issues mainly arise when a process is preempted before it has finished its job on a shared resource; without process synchronization, another process may then read an inconsistent value.
The Critical Section
• A critical section is a part of a program where shared
resources (like memory, files, or variables) are accessed by
multiple processes or threads.
• To avoid problems such as race conditions and data
inconsistency, only one process/thread should execute the
critical section at a time using synchronization techniques.
• This ensures that operations on shared resources are performed
safely and predictably.
Structure of Critical Section
1. Entry Section
• The process requests permission to enter the critical section.
• Synchronization tools (e.g., mutex, semaphore) are used to control access.

2. Critical Section: The actual code where shared resources are accessed or modified.

3. Exit Section: The process releases the lock or semaphore, allowing other processes to enter the critical section.

4. Remainder Section: The rest of the program that does not involve shared resource access.
The Critical Section Problem
Shared Resources and Race Conditions
• Shared resources include memory, global variables, files, and
databases.
• A race condition occurs when two or more processes attempt
to update shared data at the same time, leading to unexpected
results.
Example: Two bank transactions modifying the same account
balance simultaneously without synchronization may lead to
incorrect final balance.
Requirements of a Solution
A good critical section solution must ensure:
• Correctness - Shared data should remain consistent.
• Efficiency - Should minimize waiting and maximize CPU utilization.
• Fairness - No process should be unfairly delayed or starved.
Requirements of Critical Section
Solutions
1. Mutual Exclusion: At most one process can be inside the
critical section at a time.
Prevents conflicts by ensuring no two processes update the shared
resource simultaneously.
2. Progress: If no process is in the critical section, and some
processes want to enter, the choice of who enters next should not
be postponed indefinitely.
Ensures that the system continues to make progress rather than
getting stuck.
3. Bounded Waiting: There must be a limit on how long a
process waits before it gets a chance to enter the critical section.
Prevents starvation, where one process is repeatedly bypassed
while others get to execute.
Solution of Critical Section Problem
A simple solution to the critical section problem can be sketched as shown below:

acquireLock();
Process Critical Section
releaseLock();

There are various ways to implement locks in the above pseudo-code.
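One way to realize the acquireLock()/releaseLock() pattern above is with a mutex lock; a minimal sketch in Python (the `deposit` function and bank-balance workload are illustrative, not from the slides):

```python
import threading

lock = threading.Lock()   # implements acquireLock()/releaseLock()
balance = 0               # shared resource (illustrative bank balance)

def deposit(amount):
    global balance
    lock.acquire()        # entry section: acquireLock()
    balance += amount     # critical section: update shared data
    lock.release()        # exit section: releaseLock()
    # remainder section: code that does not touch shared state

threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 1000: no deposits are lost
```

Because only one thread holds the lock at a time, every increment is applied and the final balance is exact.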
Real World Example
• Banking System
• Ticket Booking System
• Print Spooler in a Networked Printer
• File Editing in Shared Documents (e.g., Google Docs, MS
Word with shared access)
Semaphores in Process
Synchronization
• A semaphore is a synchronization tool used in operating
systems to manage access to shared resources in a multi-
process or multi-threaded environment.
• It is an integer variable that controls process execution using
atomic operations like wait() and signal().
• Semaphores help prevent race conditions and ensure proper
coordination between processes.
How Semaphores work?
A semaphore in OS uses two primary atomic operations:

wait(S)  // P operation
{
    S.value--;
    if (S.value < 0) {
        add this process to S.L (waiting list);
        block(); // Suspend the process
    }
}
• Decrements the semaphore value.
• If the value is less than 0, the process waits until the resource is available.
• Used to acquire a resource.

signal(S)  // V operation
{
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L (waiting list);
        wakeup(P); // Resume the blocked process
    }
}

• Increments the semaphore value.
• If there are waiting processes, one is awakened.
• Used to release a resource.
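The wait()/signal() behaviour can be demonstrated with a binary semaphore guarding a shared counter; a minimal sketch in Python (the counter workload is an assumed example):

```python
import threading

S = threading.Semaphore(1)   # semaphore S, initialized to 1
counter = 0                  # shared resource

def worker():
    global counter
    for _ in range(100_000):
        S.acquire()          # wait(S): enter the critical section
        counter += 1         # critical section: update shared data
        S.release()          # signal(S): leave the critical section

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: mutual exclusion prevents lost updates
```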
Example: Let’s consider two processes P1 and P2 sharing a
semaphore S, initialized to 1:
State 1: Both processes are in their non-critical sections, and S =
1.
State 2: P1 enters the critical section. It performs wait(S), so S =
0. P2 continues in the non-critical section.
State 3: If P2 now wants to enter, it cannot proceed since S = 0.
It must wait until S > 0.
State 4: When P1 finishes, it performs signal(S), making S = 1.
Now P2 can enter its critical section and again sets S = 0.

This mechanism guarantees mutual exclusion, ensuring that only one process can access the shared resource at a time.
Numerical
1. A counting semaphore S is initialized to 10. Then, 6 P
operations and 4 V operations are performed on S. What is the
final value of S?
2. A counting semaphore S is initialized to 7. Then, 20 P
operations and 15 V operations are performed on S. What is the
final value of S?
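Assuming the counting-semaphore convention above (each P decrements, each V increments, and the value may go negative), the final value is the initial value minus the number of P operations plus the number of V operations:

```latex
S_{\text{final}} = S_{\text{init}} - n_P + n_V
\qquad \text{1) } 10 - 6 + 4 = 8
\qquad \text{2) } 7 - 20 + 15 = 2
```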
Features of Semaphores
Mutual Exclusion: Semaphore ensures that only one process
accesses a shared resource at a time.
Process Synchronization: Semaphore coordinates the execution
order of multiple processes.
Resource Management: Limits access to a finite set of
resources, like printers, devices, etc.
Avoiding Deadlocks: Prevents deadlocks by controlling the
order of allocation of resources.
Types of Semaphores
1. Counting Semaphore
A counting semaphore can have values ranging from 0 to any
positive integer. It is used when multiple instances of a resource
are available and need to be managed.

• Value ranges from 0 to n.
• Manages multiple resource instances.
• Controls access to limited resources.
2. Binary Semaphore
A binary semaphore has only two possible values: 0 and 1. It is
mainly used for mutual exclusion, ensuring that only one process
enters the critical section at a time.

• Value is either 0 or 1.
• Used for mutual exclusion.
• Similar to a mutex lock.
• Managing access to a single critical section
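A counting semaphore in action: limiting concurrent access to a pool of identical resources. A minimal sketch in Python (the pool of 3 "printers" and the `print_job` name are assumed examples):

```python
import threading
import time

printers = threading.Semaphore(3)  # counting semaphore: 3 identical printers
in_use = 0                         # how many printers are busy right now
peak = 0                           # highest concurrency observed
stats_lock = threading.Lock()      # protects the two counters above

def print_job(job_id):
    global in_use, peak
    printers.acquire()             # wait(): take one printer instance
    with stats_lock:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)               # simulate printing
    with stats_lock:
        in_use -= 1
    printers.release()             # signal(): return the printer

threads = [threading.Thread(target=print_job, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3, the semaphore's initial value
```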
Limitations of Semaphores
• Priority Inversion: A low-priority process holding a semaphore
can block a high-priority one.
• Deadlock: Processes may wait on each other’s semaphores in a
cycle, causing indefinite blocking.
• Complex to Manage: The OS must carefully track wait and
signal calls; misuse can cause errors.
• Busy Waiting: In basic implementations, processes may keep
checking the semaphore value, wasting CPU time.
Classical Problems of Synchronization
1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem
Dining Philosopher Problem
• The Dining Philosopher Problem is a classic synchronization
problem introduced by Edsger Dijkstra in 1965.
• It illustrates the challenges of resource sharing in concurrent
programming, such as deadlock, starvation, and mutual
exclusion.
Problem Statement
• K philosophers sit around a circular table.
• Each philosopher alternates between thinking and eating.
• There is one chopstick between each philosopher (total K
chopsticks).
• A philosopher must pick up two chopsticks (left and right) to
eat.
• Only one philosopher can use a chopstick at a time.
The Challenge:

Design a synchronization mechanism so that philosophers can eat without causing deadlock (all waiting forever) or starvation (some never get a chance to eat).
Dining Philosopher
Two states in whole life:
1. Thinking
2. Eating
Issues in the Problem
Deadlock: If every philosopher picks up their left chopstick first, no one can pick up the right one, creating a circular wait.
Starvation: Some philosophers may never get a chance to eat if
others keep eating.
Concurrency Control: Must ensure no two adjacent
philosophers eat simultaneously.
Algorithm
• Each chopstick is represented as a binary semaphore (mutex).
• Philosopher must acquire both left and right semaphores
before eating.
• After eating, the philosopher releases both semaphores.
Pseudocode
semaphore chopstick[5] = {1,1,1,1,1};

Philosopher(i):
while(true) {
    think();
    wait(chopstick[i]);          // pick left chopstick
    wait(chopstick[(i+1)%5]);    // pick right chopstick
    eat();
    signal(chopstick[i]);        // put left chopstick
    signal(chopstick[(i+1)%5]);  // put right chopstick
}
Problem
• If all philosophers pick up their left chopstick at the same time, they will all wait for the right chopstick, resulting in deadlock.

Deadlock Avoidance
• One common fix is to allow only N-1 philosophers to pick
chopsticks at a time. This breaks the circular wait condition.
• This ensures that at least one philosopher can always eat,
preventing deadlock.
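The N-1 fix can be sketched with an extra "room" semaphore initialized to N-1, alongside one binary semaphore per chopstick; a minimal Python version (the names `room` and `eat_count` and the 3-meal workload are assumptions for the demo):

```python
import threading

N = 5                       # philosophers (and chopsticks)
MEALS = 3                   # how many times each philosopher eats

chopsticks = [threading.Semaphore(1) for _ in range(N)]
room = threading.Semaphore(N - 1)   # at most N-1 may try to pick up chopsticks
eat_count = [0] * N

def philosopher(i):
    for _ in range(MEALS):
        # think() would go here
        room.acquire()                     # breaks the circular-wait condition
        chopsticks[i].acquire()            # pick up left chopstick
        chopsticks[(i + 1) % N].acquire()  # pick up right chopstick
        eat_count[i] += 1                  # eat()
        chopsticks[(i + 1) % N].release()  # put down right chopstick
        chopsticks[i].release()            # put down left chopstick
        room.release()                     # leave the room

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(eat_count)  # every philosopher finishes all meals: no deadlock
```

Since at most N-1 philosophers compete for N chopsticks, at least one can always acquire both and eventually release them, so every thread terminates.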
