Unit 3 Notes

Unit 3 discusses process synchronization, focusing on the need for co-operating processes to maintain data consistency and avoid race conditions. It covers critical-section problems, solutions like Peterson's algorithm, semaphores, and classic synchronization problems such as the bounded-buffer, readers-writers, and dining-philosophers problems. Additionally, it highlights the use of monitors as a high-level synchronization construct to prevent common errors associated with semaphores.


Unit 3 PROCESS SYNCHRONIZATION

Synchronization
• A co-operating process is one that can affect or be affected by other processes.
• Co-operating processes may either
→ share a logical address-space (i.e. code & data) or
→ share data only through files or messages.
• Concurrent-access to shared-data may result in data-inconsistency.
• To maintain data-consistency:
The orderly execution of co-operating processes is necessary.
• Suppose that we want to provide a solution to the producer-consumer problem that fills all buffers.
We can do so by having a variable counter that keeps track of the no. of full buffers.
Initially, counter = 0.
 counter is incremented by the producer after it produces a new buffer.
 counter is decremented by the consumer after it consumes a buffer.
• Shared-data:

Producer Process: Consumer Process:

• A situation where several processes access & manipulate same data concurrently and the outcome
of the execution depends on particular order in which the access takes place, is called a race
condition.
• Example: counter++ could be implemented as a register load, an increment, and a store; counter-- may be
implemented similarly as a register load, a decrement, and a store.

• Consider this execution interleaving with counter = 5 initially:

The value of counter may be either 4 or 6, whereas the correct result should be 5. This is an example of a
race condition.
• To prevent race conditions, concurrent-processes must be synchronized.
Critical-Section Problem
• Critical-section is a segment-of-code in which a process may be
→ changing common variables
→ updating a table or
→ writing a file.
• Each process has a critical-section in which the shared-data is accessed.
• The general structure of a typical process is as follows (Figure 2.12):
1) Entry-section
 Requests permission to enter the critical-section.
2) Critical-section
 Mutually exclusive in time i.e. while one process executes in its critical-section, no other process may execute in its own critical-section.
3) Exit-section
 Follows the critical-section.
4) Remainder-section

Figure 2.12 General structure of a typical process

• Problem statement:
"Ensure that when one process is executing in its critical-section, no other process is to be allowed
to execute in its critical-section".
• A solution to the problem must satisfy the following 3 requirements:
1) Mutual Exclusion
 Only one process can be in its critical-section.
2) Progress
 Only processes that are not executing in their remainder-sections can participate in deciding
which process enters its critical-section next, and this selection cannot be postponed indefinitely.
3) Bounded Waiting
 There must be a bound on the number of times that other processes are allowed to enter
their critical-sections after a process has made a request to enter its critical-section and before
the request is granted.
• Two approaches used to handle critical-sections:
1) Preemptive Kernels
 Allows a process to be preempted while it is running in kernel-mode.
2) Non-preemptive Kernels
 Does not allow a process running in kernel-mode to be preempted.

Peterson’s Solution
• This is a classic software-based solution to the critical-section problem.
• This is limited to 2 processes.
• The 2 processes alternate execution between
→ critical-sections and
→ remainder-sections.
• The 2 processes share 2 variables (Figure 2.13):

where turn = indicates whose turn it is to enter its critical-section.


(i.e., if turn==i, then process Pi is allowed to execute in its critical-section).
flag = used to indicate if a process is ready to enter its critical-section.
(i.e. if flag[i]=true, then Pi is ready to enter its critical-section).
Figure 2.13 The structure of process Pi in Peterson's solution

• To enter the critical-section,


→ firstly process Pi sets flag[i] to be true and
→ then sets turn to the value j.
• If both processes try to enter at the same time, turn will be set to both i and j at roughly the same
time; only one of these assignments lasts, since the other is immediately overwritten.
• The final value of turn determines which of the 2 processes is allowed to enter its critical-section first.
• To prove that this solution is correct, we show that:
1) Mutual-exclusion is preserved.
2) The progress requirement is satisfied.
3) The bounded-waiting requirement is met.

Synchronization Hardware
Hardware based Solution for Critical-section Problem
• A lock is a simple tool used to solve the critical-section problem.
• Race conditions are prevented by following restriction (Figure 2.14).
"A process must acquire a lock before entering a critical-section.
The process releases the lock when it exits the critical-section".

Figure 2.14: Solution to the critical-section problem using locks

Hardware instructions for solving critical-section problem


• Modern systems provide special hardware instructions
→ to test & modify the content of a word atomically or
→ to swap the contents of 2 words atomically.
• Atomic-operation means an operation that completes in its entirety without interruption.

TestAndSet()
• This instruction is executed atomically (Figure 2.15).
• If two TestAndSet() instructions are executed simultaneously (on different CPUs), they will be executed
sequentially in some arbitrary order.
Figure 2.15 The definition of the test and set() instruction

TestAndSet with Mutual Exclusion


• If the machine supports the TestAndSet(), we can implement mutual-exclusion by declaring a
boolean variable lock, initialized to false (Figure 2.16).

Figure 2.16 Mutual-exclusion implementation with test and set()

Semaphores
• A semaphore is a synchronization-tool.
• It is used to control access to shared-variables so that only one process may, at any point in time,
change the value of the shared-variable.
• A semaphore(S) is an integer-variable that is accessed only through 2 atomic-operations:
1) wait() and
2) signal().
• wait() is termed P ("to test").
signal() is termed V ("to increment").
• Definition of wait(): Definition of signal():

• When one process modifies the semaphore-value, no other process can simultaneously modify that
same semaphore-value.
• Also, in the case of wait(S), the following 2 operations must be executed without interruption:
1) Testing of S (S <= 0) and
2) Modification of S (S--)

Semaphore Usage
Counting Semaphore
• The value of a counting semaphore can range over an unrestricted domain.
Binary Semaphore
• The value of a binary semaphore can range only between 0 and 1.
• On some systems, binary semaphores are known as mutex locks, as they are locks that provide
mutual-exclusion.

1) Solution for Critical-section Problem using Binary Semaphores


• Binary semaphores can be used to solve the critical-section problem for multiple processes.
• The 'n' processes share a semaphore mutex initialized to 1 (Figure 2.20).
Figure 2.20 Mutual-exclusion implementation with semaphores

2) Use of Counting Semaphores


• Counting semaphores can be used to control access to a given resource consisting of a finite
number of instances.
• The semaphore is initialized to the number of resources available.
• Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby
decrementing the count).
• When a process releases a resource, it performs a signal() operation (incrementing the count).
• When the count for the semaphore goes to 0, all resources are being used.
• After that, processes that wish to use a resource will block until the count becomes greater than 0.

3) Solving Synchronization Problems


• Semaphores can also be used to solve synchronization problems.
• For example, consider 2 concurrently running-processes:
P1 with a statement S1 and
P2 with a statement S2.
• Suppose we require that S2 be executed only after S1 has completed.
• We can implement this scheme readily
→ by letting P1 and P2 share a common semaphore synch initialized to 0, and
→ by inserting the following statements in process P1

and the following statements in process P2

• Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal (synch), which is
after statement S1 has been executed.
Semaphore Implementation
• Main disadvantage of semaphore: Busy waiting.
• Busy waiting: While a process is in its critical-section, any other process that tries to enter its
critical-section must loop continuously in the entry code.
• Busy waiting wastes CPU cycles that some other process might be able to use productively.
• This type of semaphore is also called a spinlock (because the process "spins" while waiting for the
lock).
• To overcome busy waiting, we can modify the definition of the wait() and signal() as follows:
1) When a process executes the wait() and finds that the semaphore-value is not positive, it
must wait. However, rather than engaging in busy waiting, the process can block itself.
2) A process that is blocked (waiting on a semaphore S) should be restarted when some other
process executes a signal(). The process is restarted by a wakeup().
• We assume 2 simple operations:
1) block() suspends the process that invokes it.
2) wakeup(P) resumes the execution of a blocked process P.
• We define a semaphore as follows:

• Definition of wait(): Definition of signal():

• This (critical-section) problem can be solved in two ways:


1) In a uni-processor environment
¤ Inhibit interrupts when the wait and signal operations execute.
¤ Only current process executes, until interrupts are re-enabled & the scheduler regains
control.
2) In a multiprocessor environment
¤ Inhibiting interrupts doesn't work.
¤ Use the hardware / software solutions described above.

Deadlocks & Starvation


• Deadlock occurs when 2 or more processes are waiting indefinitely for an event that can be
caused by only one of the waiting processes.
• The event in question is the execution of a signal() operation.
• To illustrate this, consider 2 processes, P0 and P1, each accessing 2 semaphores, S and Q. Let S and
Q be initialized to 1.

• Suppose that P0 executes wait(S) and then P1 executes wait(Q).


When P0 executes wait(Q), it must wait until P1 executes signal(Q).
Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S).
Since these signal() operations can never be executed, P0 & P1 are deadlocked.
• Starvation (indefinite blocking) is another problem related to deadlocks.
• Starvation is a situation in which processes wait indefinitely within the semaphore.
• Indefinite blocking may occur if we remove processes from the list associated with a semaphore in
LIFO (last-in, first-out) order.



Classic Problems of Synchronization
1) Bounded-Buffer Problem
2) Readers and Writers Problem
3) Dining-Philosophers Problem

The Bounded-Buffer Problem


• The bounded-buffer problem is related to the producer-consumer problem.
• There is a pool of n buffers, each capable of holding one item.
• Shared-data

where,
¤ mutex provides mutual-exclusion for accesses to the buffer-pool.
¤ empty counts the number of empty buffers.
¤ full counts the number of full buffers.
• Note the symmetry between the producer and the consumer:
¤ The producer produces full buffers for the consumer.
¤ The consumer produces empty buffers for the producer.
• Producer Process: Consumer Process:
The Readers-Writers Problem
• A data set is shared among a number of concurrent processes.
• Readers are processes which want to only read the database (DB).
Writers are processes which want to update (i.e. to read & write) the DB.
• Problem:
 Obviously, 2 readers can access the shared-DB simultaneously without any problems.
 However, if a writer & another process (either a reader or a writer) access the shared-DB
simultaneously, problems may arise.
Solution:
 The writers must have exclusive access to the shared-DB while writing to the DB.
• Shared-data

where,
¤ mutex is used to ensure mutual-exclusion when the variable readcount is updated.
¤ wrt is common to both reader and writer processes.
wrt is used as a mutual-exclusion semaphore for the writers.
wrt is also used by the first/last reader that enters/exits the critical-section.
¤ readcount counts no. of processes currently reading the object.
Initialization
mutex = 1, wrt = 1, readcount = 0
Writer Process: Reader Process:

• The readers-writers problem and its solutions are used to provide reader-writer locks on some
systems.
• The mode of lock needs to be specified:
1) read mode
 When a process wishes to read shared-data, it requests the lock in read mode.
2) write mode
 When a process wishes to modify shared-data, it requests the lock in write mode.
• Multiple processes are permitted to concurrently acquire a lock in read
mode, but only one process may acquire the lock for writing.
• These locks are most useful in the following situations:
1) In applications where it is easy to identify
→ which processes only read shared-data and
→ which processes only write shared-data.
2) In applications that have more readers than writers.
The Dining-Philosophers Problem
• Problem statement:
 There are 5 philosophers with 5 chopsticks (semaphores).
 A philosopher is either eating (with two chopsticks) or thinking.
 The philosophers share a circular table (Figure 2.21).
 The table has
→ a bowl of rice in the center and
→ 5 single chopsticks.
 From time to time, a philosopher gets hungry and tries to pick up the 2 chopsticks that are
closest to her.
 A philosopher may pick up only one chopstick at a time.
 Obviously, she cannot pick up a chopstick that is already in the hand of a neighbor.
 When a hungry philosopher has both her chopsticks at the same time, she eats without
releasing her chopsticks.
 When she is finished eating, she puts down both of her chopsticks and starts thinking again.
• Problem objective:
To allocate several resources among several processes in a deadlock-free & starvation-free manner.
• Solution:
 Represent each chopstick with a semaphore (Figure 2.22).
 A philosopher tries to grab a chopstick by executing a wait() on the semaphore.
 The philosopher releases her chopsticks by executing the signal() on the semaphores.
 This solution guarantees that no two neighbors are eating simultaneously.
 Shared-data
semaphore chopstick[5];
Initialization
chopstick[5]={1,1,1,1,1}.

Figure 2.21 Situation of dining philosophers Figure 2.22 The structure of philosopher

• Disadvantage:
1) Deadlock may occur if all 5 philosophers become hungry simultaneously and grab their left
chopstick. When each philosopher tries to grab her right chopstick, she will be delayed forever.
• Three possible remedies to the deadlock problem:
1) Allow at most 4 philosophers to be sitting simultaneously at the table.
2) Allow a philosopher to pick up her chopsticks only if both chopsticks are available.
3) Use an asymmetric solution; i.e. an odd philosopher picks up first her left chopstick and then
her right chopstick, whereas an even philosopher picks up her right chopstick and then her left
chopstick.

Monitors
• Monitor is a high-level synchronization construct.
• It provides a convenient and effective mechanism for process synchronization.
Need for Monitors
• When programmers use semaphores incorrectly, following types of errors may occur:
1) Suppose that a process interchanges the order in which the wait() and signal() operations on
the semaphore "mutex" are executed, i.e. it executes signal(mutex) before its critical-section and wait(mutex) after it:
 In this situation, several processes may be executing in their critical-sections simultaneously,
violating the mutual-exclusion requirement.
2) Suppose that a process replaces signal(mutex) with wait(mutex). That is, it executes
wait(mutex); critical-section; wait(mutex);
 In this case, a deadlock will occur.


3) Suppose that a process omits the wait(mutex), or the signal(mutex), or both.
 In this case, either mutual-exclusion is violated or a deadlock will occur.
Monitors Usage
• A monitor type presents a set of programmer-defined operations that are provided to ensure
mutual-exclusion within the monitor.
• It also contains (Figure 2.23):
→ declaration of variables
→ bodies of procedures(or functions).
• A procedure defined within a monitor can access only those variables declared locally within the
monitor and its formal-parameters.
Similarly, the local-variables of a monitor can be accessed by only the local-procedures.

Figure 2.23 Syntax of a monitor

• Only one process at a time is active within the monitor (Figure 2.24).
• To allow a process to wait within the monitor, a condition variable must be declared, as
condition x, y;
• A condition variable can only be used with the following 2 operations (Figure 2.25):
1) x.signal()
 This operation resumes exactly one suspended process. If no process is suspended, then the
signal operation has no effect.
2) x.wait()
 The process invoking this operation is suspended until another process invokes x.signal().

Figure 2.24 Schematic view of a monitor Figure 2.25 Monitor with condition variables



• Suppose when the x.signal() operation is invoked by a process P, there exists a suspended process Q
associated with condition x.
• Both processes can conceptually continue with their execution. Two possibilities exist:
1) Signal and wait
 P either waits until Q leaves the monitor or waits for another condition.
2) Signal and continue
 Q either waits until P leaves the monitor or waits for another condition.

Dining-Philosophers Solution Using Monitors


• The restriction is
 A philosopher may pick up her chopsticks only if both of them are available.
• Description of the solution:
1) The distribution of the chopsticks is controlled by the monitor dp (Figure 2.26).
2) Each philosopher, before starting to eat, must invoke the operation pickup(). This act may
result in the suspension of the philosopher process.
3) After the successful completion of the operation, the philosopher may eat.
4) Following this, the philosopher invokes the putdown() operation.
5) Thus, philosopher i must invoke the operations pickup() and putdown() in the following
sequence:
dp.pickup(i); ... eat ... dp.putdown(i);
Figure 2.26 A monitor solution to the dining-philosopher problem


Implementing a Monitor using Semaphores
• A process
→ must execute wait(mutex) before entering the monitor and
→ must execute signal(mutex) after leaving the monitor.
• Variables used:
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next_count = 0;
where
¤ mutex is provided for each monitor.
¤ next is used by a signaling process to wait until the resumed process either leaves the monitor or waits again.
¤ next_count counts the number of processes suspended on next.
• Each external procedure F is replaced by

• Mutual-exclusion within a monitor is ensured.


• How are condition variables implemented?
For each condition variable x, we have:
semaphore x_sem; // (initially = 0)
int x_count = 0;
• Definition of x.wait(): Definition of x.signal():
Resuming Processes within a Monitor
• Problem:
If several processes are suspended, then how to determine which of the suspended processes
should be resumed next?
Solution-1: Use an FCFS ordering i.e. the process that has been waiting the longest is resumed first.
Solution-2: Use the conditional-wait construct i.e. x.wait(c)
¤ c is an integer expression evaluated when the wait operation is executed (Figure 2.27).
¤ The value of c (a priority number) is then stored with the name of the process that is
suspended.
¤ When x.signal() is executed, the process with the smallest associated priority number is
resumed next.

Figure 2.27 A monitor to allocate a single resource

• ResourceAllocator monitor controls the allocation of a single resource among competing processes.
• Each process, when requesting an allocation of the resource, specifies the maximum time it plans to
use the resource.
• The monitor allocates the resource to the process that has the shortest time-allocation request.
• A process that needs to access the resource in question must observe the following sequence:
R.acquire(t); ... access the resource ... R.release();
where R is an instance of type ResourceAllocator and t is the requested time-allocation.


• Following problems can occur:
 A process might access a resource without first gaining access permission to the resource.
 A process might never release a resource once it has been granted access to the resource.
 A process might attempt to release a resource that it never requested.
 A process might request the same resource twice.
