
PROCESS SYNCHRONIZATION

LECTURE 5
SYNCHRONIZATION

Message passing may be either blocking or non-blocking.

Blocking is considered synchronous:
- Blocking send: the sender blocks until the message is received
- Blocking receive: the receiver blocks until a message is available

Non-blocking is considered asynchronous:
- Non-blocking send: the sender sends the message and continues
- Non-blocking receive: the receiver receives either a valid message or null
BUFFERING

A queue of messages is attached to the link; it is implemented in one of three ways:

1. Zero capacity - holds 0 messages; the sender must wait for the receiver
2. Bounded capacity - finite length of n messages; the sender must wait if the link is full
3. Unbounded capacity - infinite length; the sender never waits
PROCESS SYNCHRONIZATION
BACKGROUND

- Concurrent access to shared data may result in data inconsistency.
- Maintaining data consistency requires mechanisms to ensure the orderly
  execution of cooperating processes.
- Suppose that we want a solution to the producer-consumer problem that
  fills all the buffers. We can do so by having an integer count that keeps
  track of the number of full buffers. Initially, count is set to 0. It is
  incremented by the producer after it produces a new item and decremented
  by the consumer after it consumes an item.
PRODUCER

while (true)
{
    /* produce an item and put it in nextProduced */
    while (count == BUFFER_SIZE)
        ;   /* do nothing: buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
CONSUMER

while (true)
{
    while (count == 0)
        ;   /* do nothing: buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
RACE CONDITION

count++ could be implemented as

    register1 = count
    register1 = register1 + 1
    count = register1

count-- could be implemented as

    register2 = count
    register2 = register2 - 1
    count = register2
CONTINUE…

Consider this execution interleaving, with count = 5 initially:

    S0: producer executes register1 = count          {register1 = 5}
    S1: producer executes register1 = register1 + 1  {register1 = 6}
    S2: consumer executes register2 = count          {register2 = 5}
    S3: consumer executes register2 = register2 - 1  {register2 = 4}
    S4: producer executes count = register1          {count = 6}
    S5: consumer executes count = register2          {count = 4}
CRITICAL SECTION PROBLEM

1. Mutual Exclusion - If process Pi is executing in its critical section,
   then no other process can be executing in its critical section.
2. Progress - If no process is executing in its critical section and there
   exist some processes that wish to enter their critical sections, then the
   selection of the process that will enter its critical section next cannot
   be postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other
   processes are allowed to enter their critical sections after a process
   has made a request to enter its critical section and before that request
   is granted.

Assume that each process executes at a nonzero speed.
SOLUTION TO CRITICAL SECTION PROBLEM
PETERSON’S SOLUTION

A two-process solution.

Assume that the LOAD and STORE instructions are atomic; that is, they cannot
be interrupted.

The two processes share two variables:
- int turn;
- boolean flag[2];

The variable turn indicates whose turn it is to enter the critical section.
The flag array is used to indicate if a process is ready to enter the
critical section: flag[i] = true implies that process Pi is ready.
PROCESS ALGORITHM FOR PROCESS PI
while (true)
{
    flag[i] = TRUE;
    turn = j;
    while (flag[j] == TRUE && turn == j)
        ;   /* busy wait */

    /* CRITICAL SECTION */

    flag[i] = FALSE;

    /* REMAINDER SECTION */
}
SYNCHRONIZATION HARDWARE

Many systems provide hardware support for critical-section code.

- Uniprocessors could simply disable interrupts:
  - Currently running code would execute without preemption
  - Generally too inefficient on multiprocessor systems; operating systems
    using this approach are not broadly scalable

Modern machines provide special atomic hardware instructions
(atomic = non-interruptible) that either:
- test a memory word and set its value, or
- swap the contents of two memory words
SEMAPHORE

In computer science, a semaphore is a variable or abstract data type used to
control access to a common resource by multiple processes in a concurrent
system, such as a multitasking operating system.
SEMAPHORE

- A synchronization tool that does not require busy waiting
- Semaphore S: an integer variable
- Two standard operations modify S: wait() and signal()
- Originally called P() and V()
- Less complicated; S can only be accessed via these two indivisible
  (atomic) operations:

wait (S)
{
    while (S <= 0)
        ;   /* no-op */
    S--;
}

signal (S)
{
    S++;
}
SEMAPHORE AS GENERAL SYNCHRONIZATION TOOL

- Counting semaphore - the integer value can range over an unrestricted domain
- Binary semaphore - the integer value can range only between 0 and 1; can
  be simpler to implement
SEMAPHORE IMPLEMENTATION

We must guarantee that no two processes can execute wait() and signal() on
the same semaphore at the same time.

- Thus, the implementation becomes a critical-section problem in which the
  wait and signal code are placed in the critical section.
- This could now introduce busy waiting into the critical-section
  implementation.
- But the implementation code is short, so there is little busy waiting if
  the critical section is rarely occupied.

Note that applications may spend lots of time in critical sections, so busy
waiting is not a good solution for them.
SEMAPHORE IMPLEMENTATION WITH NO BUSY WAITING

- With each semaphore there is an associated waiting queue.
- Each entry in a waiting queue has two data items:
  - value (of type integer)
  - pointer to the next record in the list
- Two operations:
  - block: place the process invoking the operation on the appropriate
    waiting queue
  - wakeup: remove one of the processes in the waiting queue and place it
    in the ready queue
