

Assignment #01
Submitted To: Ma'am Aleena Aqdus

Submitted By            Roll no
Talha bin Manzoor       24
Ahmed Kabir             27

Semester: Fourth
Session: 2022-2026
Course Title: Operating Systems
Course Code: CS-3102

University of Poonch Rawalakot, AJK


Department of CS & IT

Process Synchronization
 Concept of process synchronization
 Race condition and critical section
 Mutex locks and semaphores
 Deadlock and starvation problems and their solutions

 Process Synchronization Concept


Process synchronization is a fundamental concept in operating systems
that enables multiple processes to coordinate their access to shared
resources such as memory, files, devices, and data structures.
In process synchronization there are two types of processes:
Cooperative process: the execution of one process can affect the other
processes, because they share common variables, memory, and resources.
Independent process: its execution does not affect any other process,
because nothing is shared between them.

 Race condition
A race condition occurs when more than one process executes the same
code or accesses the same memory or shared variable concurrently, so
that the outcome depends on the particular order in which the accesses
take place. Because each process effectively "races" to have its result
written last, the final value of the shared data may be wrong unless
the accesses are synchronized.

Example
Initially shared = 5

Suppose there are two processes, P1 and P2, which share a common
variable (shared = 5); both are in the ready queue waiting for their
turn to execute. Process P1 runs first: the CPU copies the shared
variable (shared = 5) into P1's local variable (X = 5) and increments
it by 1 (X = 6). When the CPU reaches the line sleep(1), it switches
from the current process P1 to process P2 in the ready queue, and P1
goes into a waiting state for 1 second.

P1                      P2
int X = shared;         int y = shared;
X++;                    y--;
sleep(1);               sleep(1);
shared = X;             shared = y;

Now the CPU executes process P2 line by line: it copies the common
variable (shared = 5) into P2's local variable (Y = 5) and decrements
Y by 1 (Y = 4). When the CPU reaches sleep(1), P2 goes into a waiting
state and the CPU stays idle for some time, since there is no process
in the ready queue. After P1's 1 second elapses and it returns to the
ready queue, the CPU takes P1 back under execution and runs its
remaining line of code, storing the local variable (X = 6) back into
the common variable (shared = 6). The CPU again idles until P2's
second elapses; when P2 returns to the ready queue, the CPU executes
P2's remaining line, storing the local variable (Y = 4) back into the
common variable (shared = 4).
We expect the final value of the common variable (shared) after both
P1 and P2 have run to be 5: P1 increments it by 1 and P2 decrements it
by 1, so the two changes should cancel out. Instead we get an undesired
value (shared = 4, with P1's update lost) due to a lack of proper
synchronization.

Critical section
A critical section is a part of a program where shared resources, such
as variables or data structures, are accessed. When more than one
process tries to access the same code segment, that segment is known as
the critical section. To avoid conflicts and ensure data consistency,
only one process or thread may execute in the critical section at a
time; this is crucial in concurrent programming to prevent issues such
as race conditions.

In the entry section, the process requests permission to enter the
critical section.

Any solution to the critical-section problem must satisfy three
requirements:

Mutual Exclusion: If a process is executing in its critical section,
then no other process is allowed to execute in its critical section.
Progress: If no process is executing in the critical section and other
processes are waiting outside it, then only those processes that are
not executing in their remainder sections can participate in deciding
which will enter the critical section next, and this selection cannot
be postponed indefinitely.
Bounded Waiting: There must be a bound on the number of times that
other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before
that request is granted.
Peterson’s Solution
Peterson’s Solution is a classic software-based solution to the
critical-section problem for two processes. In Peterson’s solution, we
have two shared variables:

 boolean flag[i]: initialized to FALSE, meaning that initially neither
process is interested in entering the critical section.
 int turn: indicates which process’s turn it is to enter the critical
section.

Peterson’s Solution preserves all three conditions

Mutual Exclusion is assured as only one process can access the critical
section at any time.
Progress is also assured, as a process outside the critical section does
not block other processes from entering the critical section.
Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s Solution

It involves busy waiting. (In Peterson’s solution, the statement
“while (flag[j] && turn == j);” is responsible for this. Busy waiting
is not favored because it wastes CPU cycles that could be used to
perform other tasks.)
It is limited to 2 processes.
Peterson’s solution cannot be relied upon on modern CPU architectures,
because compilers and processors may reorder memory operations.
Mutex Lock
A Mutex lock (short for mutual exclusion lock) is a synchronization
primitive used in concurrent programming to prevent multiple threads
from accessing a shared resource simultaneously. This ensures that only
one thread can access the resource at a time, avoiding conflicts and
potential data corruption.

How Mutex Locks Work

Locking: When a thread wants to access a shared resource, it must first
acquire the mutex lock. If the lock is already held by another thread,
the requesting thread is blocked until the lock is released.

Critical Section: Once the thread acquires the lock, it can safely
access the shared resource. The section of code where the resource is
accessed is called the critical section.

Unlocking: After the thread has finished using the shared resource, it
releases the mutex lock, allowing other threads to acquire it.

Example

#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;
std::mutex mtx;

void increment() {
    for (int i = 0; i < 1000; ++i) {
        mtx.lock();    // Acquire the lock
        ++counter;     // Critical section
        mtx.unlock();  // Release the lock
    }
}

int main() {
    std::thread t1(increment);
    std::thread t2(increment);

    t1.join();
    t2.join();

    std::cout << "Final counter value: " << counter << std::endl;
    return 0;
}

Mutex Locking: Ensures that only one thread can access the critical section
at a time, preventing race conditions.

Thread Synchronization: The join calls ensure that the main thread
waits for the worker threads to complete.

Semaphores

A semaphore is a synchronization primitive used to control access to a
common resource by multiple threads in a concurrent system. It helps
manage how different processes share resources without causing
conflicts.

Key Concepts
Semaphore Types:
Counting Semaphore: Can have a value greater than one. It allows
multiple threads to access a limited number of resources.

Binary Semaphore: Also known as a mutex, it can only take the values 0
or 1, ensuring mutual exclusion.

Operations:

Wait (P): Decreases the semaphore value. If the value is zero, the
process is blocked until the value becomes greater than zero.

Signal (V): Increases the semaphore value. If there are any blocked
processes, one of them is unblocked.

#include <chrono>
#include <iostream>
#include <semaphore>  // C++20
#include <thread>

std::counting_semaphore<1> sem(1); // Semaphore initialized with a value of 1

void access_resource(int id) {
    sem.acquire(); // Wait (P) operation
    std::cout << "Thread " << id << " is accessing the resource." << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(1)); // Simulate resource access
    std::cout << "Thread " << id << " is releasing the resource." << std::endl;
    sem.release(); // Signal (V) operation
}

int main() {
    std::thread t1(access_resource, 1);
    std::thread t2(access_resource, 2);

    t1.join();
    t2.join();

    return 0;
}
Benefits and Challenges
Benefits: Semaphores are versatile and can be used to solve various
synchronization problems, including mutual exclusion and signaling.
Challenges: Incorrect use can lead to issues like deadlocks and priority
inversion.
Deadlock
Deadlock is a situation in computing where two or more processes are
unable to proceed because each is waiting for another to release
resources. The necessary conditions are mutual exclusion, hold and
wait (resource holding), circular wait, and no preemption.

Consider an example when two trains are coming toward each other on
the same track and there is only one track, none of the trains can move
once they are in front of each other. This is a practical example of
deadlock.

Such a situation occurs in operating systems when there are two or more
processes that each hold some resources and wait for resources held by
the other(s). For example, in the diagram, Process 1 is holding
Resource 1 and waiting for Resource 2, which has been acquired by
Process 2, and Process 2 is waiting for Resource 1.

Examples of Deadlock

Process 1 is holding Resource 1 and waiting for Resource 2.

Process 2 is holding Resource 2 and waiting for Resource 1.

This creates a circular wait, leading to a deadlock.

Solution: Avoiding Deadlock

To handle deadlock, we can follow these strategies:

1. Deadlock Prevention: Change the rules so that deadlock cannot
happen. For example, make sure processes request resources in a
specific order.

2. Deadlock Avoidance: Use algorithms (such as the Banker’s algorithm)
to check whether granting a resource will leave the system in a safe
state. If not, the request is denied.

3. Deadlock Detection and Recovery: Allow deadlocks to happen, but have
a mechanism to detect and resolve them, for example by terminating one
of the processes.

Applying resource ordering to the earlier example:

Process 1 requests Resource 1 first, then Resource 2.
Process 2 also requests Resource 1 first, then Resource 2.

Because both processes follow the same order, the circular wait cannot
form, and thus we prevent deadlock.

Starvation problem
Starvation in computing occurs when a process keeps waiting for
resources (such as CPU time) but never gets them, because other
processes are always given priority.

It is a disadvantage of priority scheduling: a process that is ready to
run may wait for the CPU indefinitely because of its low priority, and
this leads to starvation.

Solution of Starvation:

Aging: Aging is a technique of gradually increasing the priority of
processes that have been waiting in the system for a long time.

Process   Burst Time   Priority
P1        5            0 (low)
P2        30           5 (high)
P3        20           4
P4        10           1