MODULE-3

The document discusses process synchronization and deadlocks in operating systems, emphasizing the importance of ensuring data consistency when multiple processes access shared resources. It covers key concepts such as mutual exclusion, race conditions, and various synchronization mechanisms like mutex locks, semaphores, and Peterson's solution. Additionally, it explains the critical section problem and provides examples of how race conditions can occur and be resolved.

Uploaded by

albytomy07

23 CST 206

OPERATING SYSTEMS
MODULE-III
Abraham Silberschatz, Peter Baer Galvin, Greg Gagne, 'Operating System Concepts', 9th Edition, Wiley India, 2015
Process Synchronization and Deadlocks
3.1 Process synchronization, Race conditions
3.2 Critical Section problem, Peterson’s solution
3.3 Synchronization hardware, Mutex Locks
3.4 Semaphores
3.5 Monitors
3.6 Synchronization problem examples
3.7 Deadlocks: Necessary conditions, Resource Allocation Graphs
3.8 Deadlock prevention
3.9 Deadlock avoidance
3.10 Banker’s algorithm
3.11 Deadlock detection
3.12 Deadlock recovery
Process Synchronization
Imagine two people trying to withdraw money from a shared bank
account simultaneously using ATMs.
If both withdraw at the same time, the account balance could be
updated incorrectly.
Process synchronization is like a mechanism ensuring that one
person completes their transaction before the other begins.
• Process synchronization ensures that multiple processes can execute
concurrently without interference, maintaining data consistency.

• It is crucial in scenarios where processes share resources, such as memory, files,


or variables, to prevent race conditions.

Key Objectives of Process Synchronization:

• Mutual Exclusion: Only one process can access a critical section at a time.

• Progress: Processes not in the critical section should not block others from
entering.

• Bounded Waiting: There is a limit on how long a process waits to enter the
critical section.

In modern operating systems, synchronization mechanisms ensure that shared


resources are used safely, avoiding problems like data corruption or deadlocks.
Bounded-Buffer – Shared-Memory Solution
• Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

• Solution is correct, but can only use BUFFER_SIZE-1 elements


Bounded-Buffer – Producer
item next_produced;
while (true) {
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out) ; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}
Bounded Buffer – Consumer
item next_consumed;
while (true) {
while (in == out) ; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
/* consume the item in next consumed */
}
• Processes can execute concurrently
• May be interrupted at any time, partially completing execution

• Concurrent access to shared data may result in data inconsistency

• Maintaining data consistency requires mechanisms to ensure the


orderly execution of cooperating processes

Illustration of the problem:


• Suppose that we wanted to provide a solution to the producer-consumer problem that fills all the buffers.
• We can do so by having an integer counter that keeps track of the
number of full buffers.
• Initially, counter is set to 0. It is incremented by the producer after it produces a new item and is decremented by the consumer after it consumes an item.
Producer
while (true)
{
/* produce an item in next produced */

while (counter == BUFFER_SIZE) ;


/* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
Consumer
while (true)
{
while (counter == 0)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
/* consume the item in next consumed */
}
Race Condition
• counter++ could be implemented as

register1 = counter
register1 = register1 + 1
counter = register1

• counter-- could be implemented as


register2 = counter
register2 = register2 - 1
counter = register2

• Consider this execution interleaving with “counter = 5” initially:


S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}
Imagine two people trying to fill a jar with candies.
Each person:
• Checks how many candies are already in the jar (reads the jar's count).

• Decides to add one more candy based on the count they see.

• Updates the total count on a notepad after adding the candy.

• If both people act simultaneously, they might:

• Both check the initial count (e.g., 5 candies) at the same time.

• Add their candy and assume the new count is 6 (ignoring the other's addition).

• Write the same count (6) on the notepad, even though two candies were added, and the total should
be 7.

• The result? The jar physically has 7 candies, but the notepad incorrectly shows 6, because the
actions of the two people overlapped.

• This illustrates a race condition: the final result depends on the timing of their actions, and without
proper coordination, the outcome is incorrect.
Race Condition
• A race condition occurs when multiple threads or processes
access and manipulate shared data concurrently, and the
outcome of the execution depends on the order in which these
accesses take place.

• Race conditions are often the root cause of many concurrency


bugs and are notoriously difficult to debug because the sequence
of execution can vary with each run.

• Example: if two threads simultaneously try to update a shared


variable without proper synchronization, the result can be
unpredictable and inconsistent.
Resolving Race Conditions
• The solution to a race condition is to ensure mutual exclusion,
which prevents multiple threads from entering the critical
section at the same time.

This can be achieved using mechanisms like:

• Mutex Locks

• Semaphores

• Monitors
Critical Section Problem
• The Critical Section Problem arises when multiple processes or
threads access shared resources (such as variables, files, or
databases) concurrently, and their execution results depend on
the order of execution.

• To avoid issues like race conditions, we need to design a


mechanism that ensures only one process/thread can access the
critical section at a time.
Critical Section:
• A critical section is a part of a program where shared resources are
accessed. If multiple threads/processes execute their critical sections
simultaneously, it can lead to inconsistencies or corruption of data.

Requirements for a Solution:

• Mutual Exclusion: Only one process can access the critical section at
a time.

• Progress: If no process is in the critical section, others waiting to


enter must eventually be allowed.

• Bounded Waiting: A process should not have to wait indefinitely to


enter the critical section.
Imagine a single-lane bridge connecting two towns.
• Only one car can pass through the bridge at a time (Mutual Exclusion).

• If there’s no car on the bridge, any car waiting at either side should be
allowed to cross (Progress).

• If multiple cars are waiting, each car will eventually get a turn without
being indefinitely delayed, assuming drivers take turns (Bounded Waiting).

• If drivers do not follow these rules, cars might collide on the bridge (Race
Condition) or one side might monopolize the bridge, leaving the other side
stuck forever (Starvation).

• Proper coordination ensures smooth and fair usage of the bridge.

• This highlights the need for mechanisms to prevent chaos when shared
resources (like the bridge) are accessed by multiple entities (cars).
Solution to the Critical Section Problem

To handle the critical section problem, we use synchronization


mechanisms like:

• Peterson’s Solution: A software-based solution.

• Mutex Locks: Prevent simultaneous access.

• Semaphores: Allow controlled access to shared resources.


Peterson’s Solution
• Peterson’s Solution is a classical software-based algorithm used to solve the Critical
Section Problem in a system with two processes.
• It ensures mutual exclusion, progress, and bounded waiting, fulfilling the three
essential conditions for a proper synchronization mechanism.
It operates by using two shared variables:
• flag[]: An array where flag[i] is set to true by process i when it wants to enter the critical
section.
• turn: A shared variable that indicates which process gets priority if both want to enter the
critical section.
The key idea of Peterson’s solution is:
•A process signals its intent to enter the critical section by setting its flag to true.
•It then gives the other process a chance to proceed by setting the turn variable to the other
process.
•The process waits until the other process is not interested or it's its turn.
Algorithm for Peterson's Solution
For two processes P0 and P1:
Shared variables:
flag[2] = {false, false}
turn

Each process follows these steps:

1. Entry Section:
Set flag[i] = true (indicating intent to enter).
Set turn = j (allowing the other process to go first if it wants).
Wait until flag[j] == false or turn == i.

2. Critical Section:

Safely execute the critical code.

3. Exit Section:

Set flag[i] = false (indicating exit from the critical section).

Analogy: Imagine two children, Alex and Sam, wanting to play with a shared toy (critical section). Before playing:
1. They signal their interest (set their flag to true).
2. They agree that the other child can play first (set turn to the other child).
3. Alex will wait until Sam no longer wants the toy (flag[Sam] == false) or it's Alex's turn.
This ensures they don't fight over the toy, and they take turns fairly.

#include <stdio.h>
#include <pthread.h>
#include <stdbool.h>

// Shared variables
int shared_variable = 0;       // Shared resource (critical section data)
bool flag[2] = {false, false}; // Flags to indicate interest
int turn;                      // Variable to control turn

// Function for process 0
void* process0(void* arg) {
    for (int i = 0; i < 5; i++) {
        // Entry section
        flag[0] = true;
        turn = 1;
        while (flag[1] && turn == 1); // Wait if process 1 is in critical section

        // Critical section
        shared_variable++;
        printf("Process 0 updated shared variable to %d\n", shared_variable);

        // Exit section
        flag[0] = false;
    }
    return NULL;
}

// Function for process 1
void* process1(void* arg) {
    for (int i = 0; i < 5; i++) {
        // Entry section
        flag[1] = true;
        turn = 0;
        while (flag[0] && turn == 0); // Wait if process 0 is in critical section

        // Critical section
        shared_variable++;
        printf("Process 1 updated shared variable to %d\n", shared_variable);

        // Exit section
        flag[1] = false;
    }
    return NULL;
}

// Main function
int main() {
    pthread_t t0, t1;

    // Create threads for process 0 and process 1
    pthread_create(&t0, NULL, process0, NULL);
    pthread_create(&t1, NULL, process1, NULL);

    // Wait for both threads to finish
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    printf("Final value of shared variable: %d\n", shared_variable);
    return 0;
}
Synchronization hardware
Hardware provides mechanisms that ensure atomicity.

Atomicity--A series of operations are performed as a single, indivisible


step.

Atomic instructions help prevent race conditions by ensuring that


critical sections are executed without interruption.

Atomic instructions usually work directly on a memory location and:


• Read the value of the memory location.
• Perform a modification on the value.
• Store the modified value back to the memory location.
All these steps happen as one indivisible operation.
How Atomic Instructions Work
Atomic instructions achieve atomicity through hardware mechanisms, such
as:

1.Locking the Memory Bus:


•Prevents other processors from accessing the same memory location
until the operation completes.

2.Cache Coherence Protocols:


•Ensures that updates to shared variables are immediately visible to all
processors.

3.Processor-level Instructions:
•Many modern processors include dedicated instructions (e.g., LOCK
prefix in x86 architectures) to perform atomic operations.
Benefits of Atomic Instructions
• Avoid Race Conditions:
• Prevents multiple threads from simultaneously modifying shared
variables.

• Improved Performance:
• Hardware-level atomic instructions are faster than software locks (e.g.,
mutexes or semaphores).
• Foundation for Synchronization Primitives:
• Atomic instructions are the building blocks for locks, semaphores, and
other synchronization mechanisms.
Limitations of Atomic Instructions
• Limited Scope:
• Atomic instructions typically work on small data types (e.g., integers,
pointers) and not on larger, complex structures.

• Busy Waiting:
• When used improperly (e.g., in spinlocks), they can cause threads to
waste CPU cycles.

• Hardware Dependency:
• Atomic instructions rely on specific hardware support, which may not be
available on all systems
Synchronization Hardware
• Many systems provide hardware support for implementing the critical
section code.
• All solutions are based on idea of locking
• Protecting critical regions via locks

• Uniprocessors – could disable interrupts


• Currently running code would execute without preemption
• Generally too inefficient on multiprocessor systems
• Operating systems using this approach are not broadly scalable

• Modern machines provide special atomic hardware instructions


Atomic = non-interruptible
• Either test a memory word and set its value
• Or swap the contents of two memory words


Solution to Critical-section Problem Using Locks

do
{
acquire lock
critical section
release lock
remainder section
} while (TRUE);
synchronization hardware mechanisms
• Test-and-Set instruction
• Compare-and-Swap instruction

Analogy for Synchronization Hardware

Imagine a hotel room with a door that has a mechanical "occupied" or "vacant"
sign.
The Test-and-Set instruction is like checking the sign on the door:
If the sign says "vacant," you flip it to "occupied" and enter the room (critical
section).
If the sign says "occupied," you must wait until it becomes vacant again.

For Compare-and-Swap, think of a vending machine where you insert a coin


(expected value) and check if it matches the required amount (compare).

If it matches, the machine dispenses a snack (updates the state). If not, you try
again.
1. Test-and-Set Instruction
The Test-and-Set (TAS) instruction is a hardware
instruction that atomically performs the following:

•Tests the value of a memory location.

•Sets the value to true (or any other specified value).

The result of the test determines whether the


process can proceed into the critical section
boolean TestAndSet(boolean *target)
{
boolean old = *target;
*target = true;
return old;
}

• If TestAndSet() returns false, it means the lock was free, and the
process can enter the critical section.

• If it returns true, another process is already in the critical


section, and the current process must wait.
Solution using test_and_set()
Shared Boolean variable lock, initialized to FALSE
• Solution:
do {
while (test_and_set(&lock))
; /* do nothing */
/* critical section */
lock = false;
/* remainder section */
} while (true);
2. Compare-and-Swap Instruction
• The Compare-and-Swap (CAS) instruction performs three
operations atomically:

• Compares the value at a memory location with an expected


value.

• If the values match, updates the memory location to a new


value.

• If the values do not match, it leaves the memory unchanged


int CompareAndSwap(int *value, int expected, int new_value) {
int old = *value;
if (old == expected)
*value = new_value;
return old;
}

The CompareAndSwap operation ensures atomicity, meaning:

•The "compare" (checking *value == expected) and the "swap" (writing


new_value to *value) are performed as a single, indivisible operation.

•This prevents race conditions where multiple threads might


simultaneously check and update *value
Solution using compare_and_swap
• Shared integer “lock” initialized to 0;
• Solution:
do {
while (compare_and_swap(&lock, 0, 1) != 0)
; /* do nothing */
/* critical section */
lock = 0;
/* remainder section */
} while (true);
Bounded-waiting Mutual Exclusion with test_and_set
do {
waiting[i] = true;
key = true;
while (waiting[i] && key)
key = test_and_set(&lock);
waiting[i] = false;
/* critical section */
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = false;
else
waiting[j] = false;
/* remainder section */
} while (true);
Comparison

Feature    | CompareAndSwap                               | TestAndSet
-----------|----------------------------------------------|----------------------------
Inputs     | Current value, expected value, and new value | Pointer to a lock variable
Atomicity  | Compare and update                           | Test and update
Purpose    | Used for flexible updates                    | Used primarily for locking
Mutex Locks
• OS designers build software tools to solve critical section problem

• Simplest is mutex lock

• Protect a critical section by first acquire() a lock then release() the


lock
• Boolean variable indicating if lock is available or not

• Calls to acquire() and release() must be atomic


• Usually implemented via hardware atomic instructions

• But this solution requires busy waiting


• This lock therefore called a spinlock
The Bathroom Key Scenario.
Imagine a single-person bathroom in a busy office. To avoid confusion and ensure only one person
uses the bathroom at a time:

• Mutex Lock: There is one key to the bathroom door. Whoever holds the key can enter and use the
bathroom.

• Locking (pthread_mutex_lock): When you take the key, you lock the bathroom door. This ensures
no one else can enter while you're inside.

• Critical Section: Being inside the bathroom is like being in the critical section. You're using a
shared resource (the bathroom), and no one else can use it until you're done.

• Unlocking (pthread_mutex_unlock): When you're done, you return the key, unlocking the
bathroom for the next person.

• Waiting (Blocked Threads): If someone else needs the bathroom while you're inside, they have to
wait outside until you finish and return the key.

• Fairness: The person who has been waiting the longest will get the key next (first-come, first-served).
How This Relates to Threads

• The bathroom represents the shared resource (e.g., a file, a


counter, or memory).

• The key represents the mutex lock.

• Taking the key is like calling pthread_mutex_lock().

• Returning the key is like calling pthread_mutex_unlock().

• Waiting outside is like threads being blocked until the mutex


becomes available
acquire() and release()
acquire() {
while (!available)
; /* busy wait */
available = false;
}
release() {
available = true;
}
do {
acquire lock
critical section
release lock
remainder section
} while (true);
Mutex Locks
• A mutex (mutual exclusion) is a synchronization primitive used
to solve the critical section problem, ensuring that only one
thread or process can access a shared resource at a time.
Why Mutex Locks?
• When multiple threads or processes access shared resources
simultaneously, race conditions may occur, leading to
inconsistent results.

• Mutex locks prevent this by ensuring atomicity—a thread must


acquire the lock before entering the critical section and release it
after leaving.
Key Properties of Mutex Locks
• Mutual Exclusion: Only one thread can hold the mutex at a
time.

• Blocking: Threads trying to acquire a locked mutex are put into


a waiting state.

• Ownership: A thread must release the lock it holds before others


can acquire it.

• Fairness: Mutexes often ensure fairness (threads acquire locks in the order of their requests).
Pthread Functions for Mutex Locks
• pthread_mutex_init(): Initializes a mutex.

• pthread_mutex_lock(): Acquires the lock; blocks if already


locked.

• pthread_mutex_trylock(): Non-blocking attempt to acquire a


lock.

• pthread_mutex_unlock(): Releases the lock.

• pthread_mutex_destroy(): Destroys the mutex.

