Process Synchronization

Introduction
 The Significance of Process Synchronization:
Several processes run concurrently in a multitasking operating system, sharing resources such as memory, CPU time, and input/output devices. To prevent conflicts, data corruption, and system instability, these processes must cooperate and coordinate their activities. Process synchronization solves this problem by controlling access to shared resources.
 What is Process Synchronization?
Process synchronization, or simply synchronization, is the coordination of several processes so that they work together correctly. It involves controlling access to shared resources or critical code sections in order to avoid conflicts and preserve data integrity.

The Need for Process Synchronization:


 The Critical Section Problem
The critical section problem is at the core of process synchronization. A critical section is a portion of code in which a process accesses shared variables or resources. A solution to the critical section problem must satisfy three requirements:

Mutual Exclusion: Ensuring that only one process at a time can execute inside its critical section.

Progress: Ensuring that a process waiting to enter its critical section will eventually be allowed to do so.

Bounded Waiting: Limiting how many times other processes may enter their critical sections before a waiting process is admitted.
 Race Conditions:
Race conditions arise when several processes access shared resources at the same time, which can lead to erratic and incorrect behavior. Process synchronization mechanisms aim to eliminate race conditions by enforcing order and discipline on access to those resources.
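
As an illustration, here is a minimal sketch of a race condition using POSIX threads; the counter variable and the increment function are purely illustrative. Two threads increment a shared variable without any synchronization, so updates can be lost:

    #include <pthread.h>
    #include <stdio.h>

    /* Shared variable accessed by both threads with no synchronization. */
    static long counter = 0;

    static void *increment(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;              /* read-modify-write, not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000, but the printed value is usually smaller,
           because increments from the two threads interleave and
           overwrite each other. */
        printf("counter = %ld\n", counter);
        return 0;
    }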

 Deadlocks and Starvation:


Improperly synchronized processes can lead to deadlocks, in which processes are permanently blocked while waiting for resources held by one another. Starvation is another issue: a process is repeatedly denied access to the resources it requires.

Methods of Process Synchronization:


Several techniques and synchronization primitives are used to overcome the
difficulties associated with process synchronization:

 Mutual Exclusion:
Mutual exclusion is the basic principle of allowing only one process at a time to enter a critical section. It is implemented with mechanisms such as locks and semaphores.

 Semaphores:
Semaphores are synchronization primitives used to coordinate access between processes. Depending on their intended use, they may be counting semaphores or binary semaphores (restricted to the values 0 and 1).
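
A minimal sketch of a binary semaphore used for mutual exclusion, assuming POSIX semaphores (sem_init, sem_wait, sem_post); the shared counter and the worker function are illustrative only:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t sem;               /* binary semaphore, initialized to 1 */
    static long shared = 0;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&sem);         /* wait (P): enter the critical section */
            shared++;
            sem_post(&sem);         /* signal (V): leave the critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&sem, 0, 1);       /* 0 = shared between threads, initial value 1 */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %ld\n", shared);   /* always 200000 with the semaphore */
        sem_destroy(&sem);
        return 0;
    }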

 Mutex (Mutual Exclusion) Locks:


Mutex locks are mechanisms that grant exclusive access to a critical section. They guarantee that only one thread at a time can enter the critical section.
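
A short sketch using a POSIX mutex (pthread_mutex_lock/pthread_mutex_unlock) to protect a critical section; the balance variable and deposit function are only for illustration:

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long balance = 0;

    /* Only one thread at a time executes the code between lock and unlock. */
    void deposit(long amount)
    {
        pthread_mutex_lock(&lock);      /* block until the mutex is free */
        balance += amount;              /* critical section */
        pthread_mutex_unlock(&lock);    /* release so other threads may enter */
    }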

 Condition Variables:
Condition variables allow processes to wait until a specific condition is met before continuing. They are typically used in combination with mutex locks.
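
A sketch of a condition variable paired with a mutex, assuming POSIX pthread_cond_wait and pthread_cond_signal; the ready flag and the two functions are illustrative:

    #include <pthread.h>

    static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    /* Consumer: sleeps until another thread signals that 'ready' is set. */
    void wait_until_ready(void)
    {
        pthread_mutex_lock(&m);
        while (!ready)                   /* re-check guards against spurious wakeups */
            pthread_cond_wait(&cv, &m);  /* atomically releases m and sleeps */
        pthread_mutex_unlock(&m);
    }

    /* Producer: sets the condition and wakes one waiting thread. */
    void set_ready(void)
    {
        pthread_mutex_lock(&m);
        ready = 1;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&m);
    }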

Process Synchronization Algorithm:


These algorithms are used to coordinate the execution of processes so that
they can share resources without conflicts or errors.
 Dekker’s Algorithm:
 The first-ever mutual exclusion algorithm.
 Ensures that two processes do not access the critical section
simultaneously.
 Uses flags and turn variables to maintain synchronization.

 Peterson’s Algorithm:
 An improvement over Dekker’s algorithm.
 Provides mutual exclusion for two processes.
 Involves two variables (flags and turn) to handle critical section
access.
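
A sketch of Peterson's algorithm for two processes with ids 0 and 1; the function names are only for this example. Note that on modern hardware this simplified version would also need memory barriers or sequentially consistent atomics, which are omitted here:

    #include <stdbool.h>

    /* Shared state for two processes/threads with ids 0 and 1. */
    static volatile bool flag[2] = { false, false };  /* flag[i]: process i wants in */
    static volatile int  turn = 0;                    /* which process must yield */

    void enter_critical(int i)          /* i is 0 or 1 */
    {
        int other = 1 - i;
        flag[i] = true;                 /* announce intent to enter */
        turn = other;                   /* give priority to the other process */
        while (flag[other] && turn == other)
            ;                           /* busy-wait until it is safe to enter */
        /* critical section follows */
    }

    void leave_critical(int i)
    {
        flag[i] = false;                /* allow the other process to proceed */
    }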

 Lamport’s Bakery Algorithm:


 A ticket-based algorithm that assigns "numbers" (tickets) to processes, and works for any number of processes.
 The process holding the smallest number enters the critical section first, with ties broken by process id.
 Useful for ensuring fairness in resource access.
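
A sketch of the bakery lock for a fixed number of processes (N is assumed to be 4 here, and the function names are illustrative); like Peterson's algorithm, a real implementation would also need proper memory ordering:

    #include <stdbool.h>

    #define N 4                               /* number of processes (assumed) */

    static volatile bool choosing[N];
    static volatile int  number[N];           /* 0 means "not waiting" */

    static int max_number(void)
    {
        int m = 0;
        for (int j = 0; j < N; j++)
            if (number[j] > m) m = number[j];
        return m;
    }

    void bakery_lock(int i)                   /* i = process id, 0..N-1 */
    {
        choosing[i] = true;
        number[i] = max_number() + 1;         /* take the next "ticket" */
        choosing[i] = false;
        for (int j = 0; j < N; j++) {
            while (choosing[j])
                ;                             /* wait while j is picking a number */
            while (number[j] != 0 &&
                   (number[j] < number[i] ||
                    (number[j] == number[i] && j < i)))
                ;                             /* wait while j holds an earlier ticket */
        }
    }

    void bakery_unlock(int i)
    {
        number[i] = 0;                        /* discard the ticket */
    }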

 Sleep-wake Algorithm:
 Focuses on efficient synchronization.
 Processes alternate between sleep and wake states to avoid busy
waiting.
 Often used in systems to conserve resources and improve
responsiveness.

Challenges in Process Synchronization:


These are common issues that arise when multiple processes compete for
shared resources:

 Deadlocks:

 Occur when processes wait indefinitely for resources held by each other.
 Example: Process A holds Resource 1 and waits for Resource 2, while Process B holds Resource 2 and waits for Resource 1.
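
A minimal sketch of that scenario with two POSIX mutexes standing in for the two resources; acquiring both locks in the same fixed order in every thread would remove the circular wait:

    #include <pthread.h>

    static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;   /* "Resource 1" */
    static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;   /* "Resource 2" */

    static void *process_a(void *arg)
    {
        pthread_mutex_lock(&r1);        /* A holds Resource 1 */
        pthread_mutex_lock(&r2);        /* ...and waits for Resource 2 */
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
        return NULL;
    }

    static void *process_b(void *arg)
    {
        pthread_mutex_lock(&r2);        /* B holds Resource 2 */
        pthread_mutex_lock(&r1);        /* ...and waits for Resource 1: deadlock */
        pthread_mutex_unlock(&r1);
        pthread_mutex_unlock(&r2);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, process_a, NULL);
        pthread_create(&b, NULL, process_b, NULL);
        pthread_join(a, NULL);          /* with unlucky timing, this never returns */
        pthread_join(b, NULL);
        return 0;
    }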

 Starvation:

 Happens when some processes are denied access to resources for an extended period.
 Caused by unfair resource allocation or scheduling.

 Livelocks:
 Similar to deadlocks, but processes keep changing their state
without making progress.
 Example: Two processes continually responding to each other
without resolving resource contention.

 Priority Inversion:
 A lower-priority process blocks a higher-priority process.
 Example: A low-priority task holds a resource needed by a high-
priority task, leading to delays.

Real-World Applications:
Process synchronization plays a crucial role in various fields:

 Operating Systems:
 Manages multiple processes running concurrently.
 Examples: Scheduling CPU tasks, handling I/O operations, or
managing file systems.

 Database Systems:
 Ensures consistent access to data during concurrent transactions.
 Uses locking mechanisms to avoid issues like lost updates or
inconsistent reads.

 Network Protocols:
 Coordinates data transmission across networks.
 Prevents collisions and ensures orderly communication between
nodes.

 Multithreaded Programming:
 Synchronizes threads to avoid race conditions.
 Examples: Ensuring shared memory access is thread-safe or
managing thread pools.

Key Concepts:
 Critical Section:
 A segment of code where shared resources (like variables, files, or
memory) are accessed.
 Ensuring that only one process or thread accesses the critical
section at a time is vital to prevent data inconsistency or
corruption.
 Example: Updating a bank account balance during a transaction.

 Atomic Operations:
 Operations that are performed completely or not at all, without
any interruptions.
 These are indivisible, ensuring consistency even when multiple
processes execute simultaneously.
 Example: Incrementing a counter in a multithreaded environment.
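
A sketch of that counter example using C11 atomics (<stdatomic.h>); atomic_fetch_add performs the whole read-modify-write as one indivisible step, so no updates are lost:

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static atomic_long counter = 0;             /* C11 atomic counter */

    static void *increment(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            atomic_fetch_add(&counter, 1);      /* indivisible read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", atomic_load(&counter));   /* always 2000000 */
        return 0;
    }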

 Context Switching:
 The process of saving the state of one process and loading the
state of another.
 Essential for multitasking but adds overhead to the system.
 Example: Switching between user applications and background
services in an operating system.

 Synchronization Overhead:
 The additional computational cost incurred when implementing
synchronization mechanisms (e.g., locks, semaphores).
 While necessary to maintain data integrity, excessive overhead
can reduce performance.
 Example: A heavily synchronized program might run slower due
to frequent locking and unlocking.

Theoretical Foundations:
 Concurrent Programming:
 Focuses on executing multiple processes or threads
simultaneously, sharing resources or working on different tasks.
 Achieved through techniques like threading, multitasking, or
event-driven programming.
 Example: A web server handling multiple client requests at the
same time.

 Parallel Computing:
 Executes multiple processes or computations simultaneously to
solve a problem faster.
 Typically involves dividing a large task into smaller subtasks
that can run in parallel on different processors or cores.
 Example: Machine learning training on GPUs.

 Distributed Systems:
 A network of independent computers that work together as a
single system to perform tasks.
 Requires coordination to handle resource sharing, task
allocation, and fault tolerance.
 Example: Cloud computing platforms like AWS or Google
Cloud.

Conclusion:
Process synchronization is a key concept in operating systems and concurrent programming. It guarantees that several processes can cooperate correctly and efficiently. Because they address problems such as mutual exclusion, race conditions, deadlocks, and starvation, process synchronization techniques are essential for preserving system stability and data integrity.

This overview has covered the importance of process synchronization, the critical section problem, several synchronization techniques, and real-world applications. Whether you work as a software developer or a systems programmer, building reliable and efficient concurrent systems requires a thorough understanding of process synchronization.
