
Concurrency –

Mutual Exclusion
and
Synchronization
Concurrency – What and Why?

• Execution of a set of multiple instructions at the same time
• Several processes / process threads run in parallel
• Communication plays an important role in concurrency
• Sharing results in sharing of resources – memory, files, I/O
Challenges in Concurrency

• Race Condition – when shared resources are accessed simultaneously without proper synchronization.
• Deadlock – when processes are stuck waiting for each other's resources to become available, leading to a standstill.
• Starvation – when some processes or threads are unfairly denied resources they need, despite being ready to use them.
Race Condition
• A race condition is a situation in concurrent programming where the final outcome depends on the sequence or timing of execution of multiple processes.
Race Condition
• Several processes access and manipulate shared data simultaneously.
• The final value of the shared data depends upon which process runs precisely when.
• Classic print-spooler example:
  o in = variable pointing to the next free slot
  o out = variable pointing to the next file to be printed
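The "final value depends on timing" point can be made concrete with a small sketch (plain Python, hypothetical variable names): two logical processes each perform a read-modify-write on a shared counter, and when both read before either writes, one update is lost.

```python
# Race condition sketch: two interleaved read-modify-write sequences.
# Both "processes" read the shared counter before either writes it back,
# so one increment is lost.

shared = 0

# Process 1 reads, then Process 2 reads (both see 0)
p1_local = shared
p2_local = shared

# Each writes back its own incremented local copy
shared = p1_local + 1   # shared becomes 1
shared = p2_local + 1   # shared becomes 1 again -- P1's update is lost

print(shared)  # 1, although two increments were intended
```

With a different interleaving (P1's write before P2's read) the result would be 2, which is exactly why the outcome is said to depend on timing.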
Critical Region
• A critical region is a specific section of code, or a segment of a program, where shared resources are accessed or manipulated.
• Mutual Exclusion – only one process or thread can access a critical section of code at any given time.
Achieving Mutual Exclusion
• Disabling Interrupts
  o Temporarily preventing the CPU from responding to hardware interrupts while executing a critical section of code.
  o Interrupts are signals sent by hardware devices (like the keyboard, mouse, or timer) to the CPU, which can preempt the current execution flow to handle the interrupt service routine (ISR).
  o Works only on single-processor systems.
Spinlock / Busy Waiting
• Lock variables – busy waiting / spin waiting
• Continuously testing a variable until some value appears is called busy waiting.
• It should usually be avoided, since it wastes CPU time.
• Busy waiting is used only when there is a reasonable expectation that the wait will be short.
• A lock that uses busy waiting is called a spin lock.
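A spin lock can be sketched in Python as follows. This is only an illustration: it relies on CPython's GIL making `list.pop()` / `list.append()` indivisible, whereas a real spin lock is built on an atomic hardware test-and-set instruction. The class and variable names are hypothetical.

```python
import threading

class SpinLock:
    """Spin-lock sketch: acquire() busy-waits (spins) until the lock is free."""

    def __init__(self):
        self._token = [None]          # one token present = lock is free

    def acquire(self):
        while True:                   # busy waiting: keep testing until success
            try:
                self._token.pop()     # atomic under CPython's GIL
                return
            except IndexError:
                pass                  # token already taken: keep spinning

    def release(self):
        self._token.append(None)      # put the token back

counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(10_000):
        lock.acquire()
        counter += 1                  # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000
```

Note how the spinning threads consume CPU while waiting, which is exactly the overhead the slide warns about.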
Summary

• Concurrency – executing multiple processes simultaneously.
• Shared Resources – to achieve concurrency, hardware and software resources are shared.
• Challenges – race condition, deadlock, starvation.
• Critical Region – where shared resources are accessed.
• Mutual Exclusion – at a time, only a single process can enter the critical section.
Achieve Mutual Exclusion

Hardware Solution – Disabling Interrupts
• Low-level technique
• Works well only in single-processor systems

Software Solution – Spinlock
• Spin in loop (busy waiting)
• CPU overhead
• Can cause deadlock
Achieve Mutual Exclusion
• Software Solutions
  o Spinlocks
  o Semaphore
  o Mutex
Atomic Operations

• Atomic operations are actions that happen as a single, indivisible unit.
• They ensure that either the entire operation completes or none of it does, without interruption.
• When an atomic operation is executed, it happens fully without being split or interrupted by other threads or processes.
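One widely used atomic primitive is compare-and-swap (CAS). The sketch below simulates it in Python: real hardware provides CAS as a single indivisible instruction, and here a lock stands in for that indivisibility. The `AtomicCell` class and function names are illustrative, not a standard API.

```python
import threading

class AtomicCell:
    """Sketch of an atomic cell: compare_and_swap either fully succeeds
    (value matched and was replaced) or fully fails, never half-done."""

    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()   # simulates hardware indivisibility

    def compare_and_swap(self, expected, new):
        """Atomically: if value == expected, set it to new. True on success."""
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

def atomic_increment(cell):
    # CAS retry loop: if another thread changed the value in between,
    # the swap fails and we re-read and try again.
    while True:
        old = cell.value
        if cell.compare_and_swap(old, old + 1):
            return

cell = AtomicCell()
threads = [threading.Thread(
    target=lambda: [atomic_increment(cell) for _ in range(10_000)])
    for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(cell.value)  # 40000 -- no increment is lost
```

Because each increment either completes entirely or retries, the lost-update problem from the race-condition slide cannot occur.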
Semaphore
• A semaphore is a synchronization primitive used in concurrent programming to control access to a shared resource by multiple threads or processes.
• The semaphore value indicates how many units of a resource are available in the system.
• A semaphore is essentially a counter, which can be incremented (up) or decremented (down).
• Wait (P) – decrements the semaphore by 1.
• Signal (V) – increments the semaphore by 1.
Semaphore Operations
• Initial value of S = number of resources (non-negative)
• Wait (P) operation:
  o When a process wants to acquire a resource, it decrements S by 1.
  o If S >= 0 after the decrement, the process acquires the resource and continues execution.
  o If the count becomes negative, the resource is not available and the requesting thread/process is blocked until another thread/process increases the count.
• Signal (V) operation:
  o When a process is finished with a resource, it releases the resource and increments the semaphore count.
  o If there are threads/processes waiting due to a previous wait operation (the count was negative), one of them is allowed to proceed.
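The P/V behaviour above maps directly onto Python's `threading.Semaphore`, where `acquire()` plays the role of wait (P) and `release()` of signal (V). A sketch, assuming a made-up system with two identical printers as the counted resource:

```python
import threading
import time

printers = threading.Semaphore(2)   # initial value = number of resources
in_use = 0                          # how many printers are busy right now
peak = 0                            # highest concurrency observed
state_lock = threading.Lock()       # protects the two counters above

def print_job(job_id):
    global in_use, peak
    printers.acquire()              # wait (P): blocks while both printers busy
    with state_lock:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)                # simulate printing
    with state_lock:
        in_use -= 1
    printers.release()              # signal (V): free a printer

threads = [threading.Thread(target=print_job, args=(i,)) for i in range(6)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # never exceeds 2: at most two jobs print concurrently
```

Six jobs compete for two printers; the semaphore count guarantees that `peak` can never go above the initial value of 2.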
Semaphore Benefits
• Thread Synchronization: prevents race conditions and ensures orderly access to shared resources.
• Deadlock Prevention: properly designed semaphore usage can help avoid deadlocks by controlling resource access.
• Inter-Process Communication: semaphores can be used to synchronize threads across different processes.
Mutex
• A mutex (short for "mutual exclusion") is a synchronization mechanism used in concurrent programming to ensure that only one thread at a time can access a shared resource or critical section of code.
• It provides a way to prevent race conditions.
• A mutex is a binary semaphore with only two states:
  o Locked - 0 - unavailable
  o Unlocked - 1 - available
Mutex
• Locking mechanism:
  o Acquire: a thread that wants to access the shared resource or critical section attempts to acquire (lock) the mutex.
  o Release: after finishing its work with the shared resource, the thread releases (unlocks) the mutex, allowing other threads to acquire it.
• Binary semaphore: a mutex can be implemented using a binary semaphore.
  o When the mutex is free (unlocked), its semaphore value is 1.
  o When the mutex is held (locked), its semaphore value is 0.
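The acquire/release cycle can be sketched with `threading.Lock`, Python's mutex (a `threading.Semaphore(1)`, i.e. a binary semaphore, would behave the same way). The bank-balance scenario is an illustrative example, not from the slides:

```python
import threading

mutex = threading.Lock()            # Python's mutex
balance = 0                         # shared resource

def deposit(amount, times):
    global balance
    for _ in range(times):
        mutex.acquire()             # lock: only one thread past this point
        balance += amount           # critical section
        mutex.release()             # unlock: let another thread in

threads = [threading.Thread(target=deposit, args=(1, 50_000)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 100000 -- no update is lost
```

Idiomatic Python would write the critical section as `with mutex:`, which guarantees the release even if the critical section raises an exception.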
Producer-Consumer Problem
• The Producer-Consumer Problem (Bounded Buffer Problem) is a classic synchronization problem in operating systems that demonstrates the challenges of coordinating concurrent processes.
• Producers generate data and place it into a buffer.
• Consumers take the data from the buffer and process it.
Producer-consumer Problem
• Buffer:
o A fixed-size storage area that producers fill, and consumers empty.
o The buffer can be thought of as a circular queue.
• Synchronization:
o Mechanisms are needed to ensure that producers do not overwrite data in
the buffer before it is consumed and that consumers do not try to
consume data that has not been produced yet.
Producer-consumer Problem
• Challenges
• Mutual Exclusion:
▪ Ensuring that only one process (either producer or consumer) accesses
the buffer at a time to prevent data corruption.
• Buffer Overflow:
▪ Preventing producers from adding data to a full buffer.
• Buffer Underflow:
▪ Preventing consumers from removing data from an empty buffer.
Solution- Semaphore and Mutex
• Empty: Counts the number of empty slots in the buffer.
• Full: Counts the number of filled slots in the buffer.
• Mutex: Ensures mutual exclusion for buffer access.
Producer:
do {
    // Produce an item
    wait(empty);    // Decrease the count of empty slots
    wait(mutex);    // Enter critical section
    // Add the item to the buffer
    signal(mutex);  // Exit critical section
    signal(full);   // Increase the count of full slots
} while (true);

Consumer:
do {
    wait(full);     // Decrease the count of full slots
    wait(mutex);    // Enter critical section
    // Remove the item from the buffer
    signal(mutex);  // Exit critical section
    signal(empty);  // Increase the count of empty slots
    // Consume the item
} while (true);
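The same structure can be sketched as runnable Python with `threading.Semaphore`. Two assumptions differ from the pseudocode: the buffer size (5) and item count (20) are made up, and the loops are finite so the demo terminates.

```python
import threading
from collections import deque

BUFFER_SIZE = 5
N_ITEMS = 20                              # finite so the demo terminates

buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # counts empty slots
full = threading.Semaphore(0)             # counts filled slots
mutex = threading.Lock()                  # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(N_ITEMS):           # "produce an item"
        empty.acquire()                   # wait(empty)
        with mutex:                       # wait(mutex) ... signal(mutex)
            buffer.append(item)           # add the item to the buffer
        full.release()                    # signal(full)

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()                    # wait(full)
        with mutex:
            item = buffer.popleft()       # remove the item from the buffer
        empty.release()                   # signal(empty)
        consumed.append(item)             # "consume the item"

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(N_ITEMS)))  # True: all items arrive in order
```

`empty` blocks the producer at a full buffer (no overflow) and `full` blocks the consumer at an empty buffer (no underflow), matching the two challenges listed above.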


Monitors
• A monitor is a high-level synchronization construct that helps manage access to shared
resources in concurrent programming.
• It combines mutex and condition variables to simplify synchronization.
• The idea is to encapsulate shared resources and the synchronization logic that protects
them.
• Key Features
o Mutual Exclusion: Ensures that only one thread can access the monitor's critical
section at a time.
o Condition Variables: Allow processes to wait for certain conditions to be met and to
signal other processes when those conditions change.
Monitors
• A monitor is a software module consisting of one or more procedures, an initialization sequence, and local data.
• Mutual Exclusion – only one process at a time can be executing in the monitor.
Monitors
• Monitor Procedures: These are the operations that can be performed on the shared
resources. They are encapsulated within the monitor to ensure that the synchronization
rules are followed.
• Condition Variables- for synchronization
o These are used within monitors to allow processes to wait for certain conditions to
be met.
o wait(): This operation causes the current process to block until a specific condition is
true.
o signal(): This operation wakes up one of the processes waiting on the condition
variable. If no processes are waiting, it has no effect.
o broadcast(): This operation wakes up all processes waiting on the condition variable.
Monitor Structure (diagram not included)
Monitor Steps

• When a thread enters a monitor to execute a procedure, it acquires the monitor lock.
• If another thread tries to enter the monitor while it is locked, it will be blocked until the monitor becomes available.
• Within the monitor, threads can use condition variables to wait for certain conditions to be met. When a thread calls wait(), it releases the monitor lock and goes to sleep.
• When a thread calls signal() or broadcast(), waiting threads are awakened and can attempt to re-acquire the monitor lock to proceed.
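The steps above can be sketched in Python, where `threading.Condition` bundles the monitor lock with a condition variable (`notify_all` corresponds to broadcast()). The one-slot buffer is an illustrative choice of protected resource:

```python
import threading

class OneSlotBuffer:
    """Monitor sketch: the shared data (self._item) and the synchronization
    logic that protects it are encapsulated in one module."""

    def __init__(self):
        self._cond = threading.Condition()   # monitor lock + condition variable
        self._item = None

    def put(self, item):
        with self._cond:                     # acquire the monitor lock
            while self._item is not None:    # condition: slot must be empty
                self._cond.wait()            # releases the lock while sleeping
            self._item = item
            self._cond.notify_all()          # broadcast: wake waiting threads

    def get(self):
        with self._cond:
            while self._item is None:        # condition: slot must be full
                self._cond.wait()
            item, self._item = self._item, None
            self._cond.notify_all()
            return item

buf = OneSlotBuffer()
results = []
consumer = threading.Thread(
    target=lambda: results.extend(buf.get() for _ in range(3)))
consumer.start()
for x in ["a", "b", "c"]:
    buf.put(x)                               # blocks while the slot is full
consumer.join()
print(results)  # ['a', 'b', 'c']
```

The `while` (not `if`) around each `wait()` re-checks the condition after waking, since an awakened thread must re-acquire the lock and another thread may have changed the state first.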
Message Passing
• Message passing - inter-process communication (IPC)
mechanism to allow processes to communicate and
synchronize their actions without sharing the same
memory space.
Message Passing
• Synchronization
o Blocking send, blocking receive
▪ Sender and receiver are blocked until message is delivered.
▪ Sender and receiver are synchronized - each waits for the corresponding
operation to complete before proceeding.
o Non-blocking send, blocking receive
▪ Sender does not wait for the message to be received, continues execution
immediately after sending the message.
▪ The receiver waits for the message to arrive before proceeding.
o Non-blocking send, non-blocking receive
▪ Both the sender and receiver continue execution without waiting for the
communication to complete.
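These synchronization modes can be sketched with Python's `queue.Queue`, whose `block` flag selects blocking or non-blocking behaviour (the queue plays the role of a mailbox; the message content is made up):

```python
import queue
import threading

mailbox = queue.Queue()            # a mailbox: a queue holding messages

def receiver(out):
    # Blocking receive: waits until a message arrives.
    msg = mailbox.get(block=True)
    out.append(msg)

received = []
t = threading.Thread(target=receiver, args=(received,))
t.start()

# Non-blocking send: put() on an unbounded Queue returns immediately.
mailbox.put("hello")
t.join()

# Non-blocking receive: raises queue.Empty instead of waiting.
try:
    mailbox.get(block=False)
except queue.Empty:
    received.append("no message")

print(received)  # ['hello', 'no message']
```

A bounded queue (`queue.Queue(maxsize=n)`) would also make `put(block=True)` a blocking send, covering the blocking-send/blocking-receive case.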
Message Passing
• Addressing
  o Direct Addressing
  o Indirect Addressing
    ▪ Mailbox - queues to hold messages
    ▪ Decoupling
    ▪ Producer-consumer
    ▪ Client-server
Readers-Writers Problem
• A synchronization problem in operating systems where multiple processes (or threads) need to access a shared data area, such as a file, a database, or a block of memory.
• Readers - can only read the data
• Writers - can modify the data
• Conditions:
  o Any number of readers can simultaneously read the file.
  o Only one writer at a time can write to the file.
  o If a writer is writing, no reader may read.
Readers-Writers Problem - Solution
• mutex - binary semaphore protecting updates to readcount (manages readers' entry)
• wrt - binary semaphore managing writers' entry
• readcount - integer counter maintaining the number of reader processes currently reading
