Critical Section in Synchronization

Last Updated : 21 May, 2025

A critical section is a segment of a program where shared resources, such as memory, files, or ports, are accessed by multiple processes or threads. To prevent issues like data inconsistency and race conditions, synchronization techniques ensure that only one process or thread accesses the critical section at a time.

The critical section includes operations on shared variables or resources that must be executed atomically to maintain data consistency. For example, reading from or writing to a shared file or modifying a global variable requires exclusive access.

In concurrent programming, if one process modifies shared data while another reads it simultaneously, the outcome can be unpredictable. Therefore, access to shared resources must be synchronized to ensure correct program behavior.

Structure of a Critical Section

Entry Section

  • The process requests permission to enter the critical section.
  • Synchronization tools (e.g., mutex, semaphore) are used to control access.

Critical Section

  • The actual code where shared resources are accessed or modified.

Exit Section

  • The process releases the lock or semaphore, allowing other processes to enter the critical section.

Remainder Section

  • The rest of the program that does not involve shared resource access.
(Figure: Critical Section Structure)
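The four sections above can be sketched in Python with threading.Lock (a minimal illustration; the shared_counter resource and worker function are invented for the example):

```python
import threading

shared_counter = 0            # the shared resource
lock = threading.Lock()       # the synchronization tool

def worker():
    global shared_counter
    for _ in range(100_000):
        lock.acquire()        # entry section: request permission to enter
        shared_counter += 1   # critical section: access the shared resource
        lock.release()        # exit section: let other threads enter
        # remainder section: any code that does not touch shared state

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_counter)         # 400000: every increment was applied exactly once
```

Without the acquire/release pair, the four threads would interleave their read-modify-write steps and the final count would usually fall short of 400000.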

Characteristics of a Critical Section

Any valid solution to the critical section problem must satisfy the following properties:

1. Mutual Exclusion

Only one process or thread can execute in the critical section at a given time. If two or more processes access shared resources (like variables or files) at the same time without control, data inconsistency or corruption may occur. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

Example:
If two threads update the same bank account balance at the same time, the final result may be incorrect due to race conditions.

How to achieve it: use synchronization tools like mutexes, locks, or semaphores to guarantee exclusive access.
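As a minimal sketch of the bank-account example, two threads depositing into one balance stay consistent when the update is wrapped in a lock (the balance and deposit names are illustrative):

```python
import threading

balance = 0
balance_lock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        with balance_lock:    # mutual exclusion: one updater at a time
            balance += amount

t1 = threading.Thread(target=deposit, args=(1, 50_000))
t2 = threading.Thread(target=deposit, args=(1, 50_000))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)                # 100000: no updates were lost to a race
```

The `with` statement acquires the lock on entry and releases it on exit, so the read-modify-write of `balance` is never interleaved between the two threads.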

2. Progress

If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

The point is to avoid situations where the critical section is free yet every interested process is stuck waiting, wasting CPU cycles or effectively deadlocking the system.

Example:
If process A finishes its work and leaves the critical section, process B (waiting to enter) should not be delayed unnecessarily due to faulty design or logic in the algorithm.

Goal: To ensure that the system continues to make progress and doesn't freeze or hang.

3. Bounded Waiting

There must be a limit on how many times other processes are allowed to enter the critical section before a waiting process gets its turn. It is important to prevent starvation, where one process waits indefinitely while others repeatedly enter the critical section.

Example:
If process A is always skipped in favor of processes B and C, then A might never enter, even if it's ready.

Solution: Implement fair scheduling policies like FIFO queues or ticket-based systems.
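A ticket-based lock grants entry strictly in ticket order, which bounds how long any thread can wait. One possible sketch in Python (the TicketLock class is illustrative, not a standard-library type):

```python
import itertools
import threading

class TicketLock:
    """Grants the lock in ticket (FIFO) order, so no thread starves."""
    def __init__(self):
        self._tickets = itertools.count()   # next ticket number to hand out
        self._now_serving = 0               # ticket currently allowed in
        self._cond = threading.Condition()

    def acquire(self):
        with self._cond:
            my_ticket = next(self._tickets)
            while self._now_serving != my_ticket:
                self._cond.wait()           # sleep until it is our turn

    def release(self):
        with self._cond:
            self._now_serving += 1          # admit the next ticket holder
            self._cond.notify_all()

order = []
tlock = TicketLock()

def task(i):
    tlock.acquire()
    order.append(i)     # critical section: record that this thread got in
    tlock.release()

threads = [threading.Thread(target=task, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(order))    # [0, 1, 2, 3, 4, 5, 6, 7]: every thread eventually entered
```

Because tickets are served in the order they were taken, a waiting thread is passed over at most once per thread that drew a lower ticket, which is exactly the bounded-waiting guarantee.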

Handling Critical Section

Two general approaches are used to handle critical sections:

1. Preemptive Kernels: A preemptive kernel allows the operating system to interrupt or preempt a process even while it is running in kernel mode.

  • The OS can forcibly switch from one running process to another.
  • Even if a process is performing a system call or executing kernel code, it can be paused, and another process can be scheduled.

Advantages:

  • Better responsiveness, especially for real-time systems.
  • Higher CPU utilization and fairness among processes.

Disadvantages:

  • Increased complexity due to the need to manage race conditions and data consistency when kernel data structures are accessed by multiple processes.

Example Use Case: Modern desktop and server operating systems like Linux, Windows, and macOS use preemptive kernels for better multitasking.

2. Non-Preemptive Kernels: A non-preemptive kernel does not allow a process running in kernel mode to be interrupted; the process releases the CPU only of its own accord. The kernel thus ensures that only one process is active in kernel mode at any given time. The process continues until it:

  • Exits the kernel,
  • Blocks (e.g., waits for I/O), or
  • Voluntarily yields the CPU.

Advantages:

  • Simplicity: Easier to program and maintain.
  • No race conditions on kernel data since access is automatically serialized.

Disadvantages:

  • Poor responsiveness, especially if a long-running kernel operation delays other processes.
  • Not suitable for real-time or interactive systems.

Example Use Case: Older operating systems or embedded systems where simplicity and reliability outweigh responsiveness.

Critical Section Problem

The use of critical sections in a program can cause a number of issues, including:

  • Deadlock: When two or more threads or processes wait for each other to release a critical section, it can result in a deadlock situation in which none of the threads or processes can move. Deadlocks can be difficult to detect and resolve, and they can have a significant impact on a program's performance and reliability.
  • Starvation: When a thread or process is repeatedly prevented from entering a critical section, it can result in starvation, in which the thread or process is unable to progress. This can happen if the critical section is held for an unusually long period of time, or if a high-priority thread or process is always given priority when entering the critical section.
  • Overhead: When using critical sections, threads or processes must acquire and release locks or semaphores, which can take time and resources. This may reduce the program's overall performance.
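Of these issues, deadlock is the easiest to provoke: two threads that take two locks in opposite orders can each end up holding one lock while waiting forever for the other. The standard fix, acquiring locks in a single global order, can be sketched as follows (the lock and function names are invented for the example):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
completed = []

# Deadlock-prone pattern: thread 1 takes A then B while thread 2 takes
# B then A. If each grabs its first lock, both wait forever.
# Fix: every thread acquires the locks in the same global order (A, then B),
# so a cycle of "holding one, waiting for the other" can never form.

def transfer(name):
    with lock_a:          # always A first...
        with lock_b:      # ...then B, in every thread
            completed.append(name)

t1 = threading.Thread(target=transfer, args=("t1",))
t2 = threading.Thread(target=transfer, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(completed)          # both transfers finish; no deadlock occurs
```

Consistent lock ordering removes the circular-wait condition, one of the four conditions required for deadlock.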

The critical section problem can be visualized using the pseudo-code below, in which each process spins on a shared flag before entering:

do {
    while (flag);   // entry section: busy-wait while another process is inside
    flag = 1;       // claim the critical section
    // critical section
    flag = 0;       // exit section: release the flag
    // remainder section
} while (true);

Note that testing flag and setting flag = 1 are two separate steps: if two processes interleave between them, both enter the critical section at once. Closing that window is exactly what the critical section problem asks for, and it is why atomic locking primitives are needed.

Solution to the Critical Section Problem: A simple solution can be structured as shown below:

acquireLock();
Process Critical Section
releaseLock();

A thread must acquire a lock prior to executing a critical section. The lock can be acquired by only one thread. There are various ways to implement locks in the above pseudo-code.
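The acquireLock()/releaseLock() pair maps directly onto real primitives; a sketch with two common choices, a mutex and a binary semaphore (the function names here are illustrative):

```python
import threading

mutex = threading.Lock()
binary_sem = threading.Semaphore(1)   # initialized to 1: at most one holder

def critical_with_mutex():
    mutex.acquire()                   # acquireLock()
    result = "inside (mutex)"         # critical section
    mutex.release()                   # releaseLock()
    return result

def critical_with_semaphore():
    binary_sem.acquire()              # acquireLock()
    result = "inside (semaphore)"     # critical section
    binary_sem.release()              # releaseLock()
    return result

print(critical_with_mutex())
print(critical_with_semaphore())
```

A mutex conceptually has an owner (the thread that locked it should unlock it), while a semaphore is just a counter that any thread may signal; for a simple critical section either works.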

For a detailed discussion of solutions to the critical section problem, read: Solution to Critical Section Problem

Examples of critical sections in real-world applications

Banking System (ATM or Online Banking)

  • Critical Section: Updating an account balance during a deposit or withdrawal.
  • Issue if not handled: Two simultaneous withdrawals could result in an incorrect final balance due to race conditions.

Ticket Booking System (Airlines, Movies, Trains)

  • Critical Section: Reserving the last available seat.
  • Issue if not handled: Two users may be shown the same available seat and both may book it, leading to overbooking.
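The last-seat race is a check-then-act bug: both users can pass the "seat available?" check before either one updates the count. Guarding the check and the update with a single lock closes that window (the seat variables below are invented for the sketch):

```python
import threading

seats_left = 1
seat_lock = threading.Lock()
booked_by = []

def try_book(user):
    global seats_left
    with seat_lock:               # check and update form one atomic step
        if seats_left > 0:
            seats_left -= 1
            booked_by.append(user)
            return True
        return False

threads = [threading.Thread(target=try_book, args=(u,)) for u in ("alice", "bob")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(booked_by)                  # exactly one of the two users holds the seat
```

Which user wins depends on scheduling, but the invariant holds either way: one booking succeeds, one fails, and the seat is never sold twice.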

Print Spooler in a Networked Printer

  • Critical Section: Sending print jobs to the printer queue.
  • Issue if not handled: Print jobs may get mixed up or skipped if multiple users send jobs simultaneously.

File Editing in Shared Documents (e.g., Google Docs, MS Word with shared access)

  • Critical Section: Saving or writing to the shared document.
  • Issue if not handled: Simultaneous edits could lead to conflicting versions or data loss.

Online Multiplayer Gaming Servers

  • Critical Section: Updating a player's score, health, or game state in real time.
  • Issue if not handled: Game logic becomes inconsistent; players may see outdated or incorrect data.

Inventory Management in E-Commerce

  • Critical Section: Reducing stock quantity when a product is purchased.
  • Issue if not handled: Overselling of items, customer dissatisfaction.

Advantages of Critical Section

  • Prevents race conditions: By ensuring that only one process can execute the critical section at a time, race conditions are prevented, ensuring consistency of shared data.
  • Provides mutual exclusion: Critical sections provide mutual exclusion to shared resources, preventing multiple processes from accessing the same resource simultaneously and causing synchronization-related issues.
  • Avoids wasted CPU time: blocking synchronization lets a waiting process sleep instead of burning CPU cycles in a busy-wait, improving overall system efficiency.
  • Simplifies synchronization: Critical sections simplify the synchronization of shared resources, as only one process can access the resource at a time, eliminating the need for more complex synchronization mechanisms.

Disadvantages of Critical Section

  • Overhead: Implementing critical sections using synchronization mechanisms like semaphores and mutexes can introduce additional overhead, slowing down program execution.
  • Deadlocks: Poorly implemented critical sections can lead to deadlocks, where multiple processes are waiting indefinitely for each other to release resources.
  • Can limit parallelism: If critical sections are too large or are executed frequently, they can limit the degree of parallelism in a program, reducing its overall performance.
  • Can cause contention: If multiple processes frequently access the same critical section, contention for the critical section can occur, reducing performance.
