
Operating Systems:

Internals and Design Principles, 6/E


William Stallings

Chapter 5
Concurrency: Mutual
Exclusion and
Synchronization

Dave Bremer
Otago Polytechnic, N.Z.
©2008, Prentice Hall
Learning Objectives
Roadmap
• Principles of Concurrency
• Mutual Exclusion: Hardware Support
• Semaphores
• Monitors
• Message Passing
• Readers/Writers Problem
Multiple Processes
• Central to the design of modern Operating
Systems is managing multiple processes/threads
– Multiprogramming: Multiple processes on a single
processor
– Multiprocessing: Multiple processes on multiple
processors
– Distributed Processing: Multiple processes on
multiple distributed processors
• Big Issue is Concurrency
– Managing the interaction of all of these processes
Concurrency
Arises in Three Different Contexts:

• Multiple Applications: multiprogramming was invented
to allow processing time to be shared among active
applications
• Structured Applications: an extension of modular design
and structured programming; an application can be
programmed as multiple processes or threads
• Operating System Structure: OSes themselves are
implemented as a set of processes or threads
Concurrency & Shared Data
• Concurrent processes may share data to support
communication, information exchange, ...
• Threads in the same process can share the global
address space
• Concurrent sharing may cause problems
Key Terms
Interleaving and
Overlapping Processes
• Earlier (Ch2) we saw that processes may
be interleaved on uniprocessors
Interleaving and
Overlapping Processes
• And not only interleaved but overlapped
on multi-processors
A Simple Example: Uni-Processor

void echo()
{
    chin = getchar();
    chout = chin;
    putchar(chout);
}

• echo() is a global function; chin and chout are shared
• P1 enters echo(), reads x, and is then blocked, timed
out, or interrupted
• chin = x
• P2 gets the processor, enters echo() with y, and
returns from echo()
• chin = y
• P1 resumes execution
• chin = ?
• chout = ?
• List the output ...
• What is the problem here?
Solution ?
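Before looking at solutions, a runnable pthreads sketch of this race
(an illustration, not from the slides): two threads stand in for P1 and
P2, and the "typed" characters are passed in directly; the interleaving,
and hence the output, depends on the scheduler.

#include <stdio.h>
#include <pthread.h>

static char chin, chout;              /* shared, as on the slide */

static void *echo_thread(void *arg)
{
    chin = *(char *)arg;              /* stands in for chin = getchar(); */
    /* a preemption here lets the other thread overwrite chin */
    chout = chin;
    putchar(chout);                   /* may echo the other thread's char */
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    char x = 'x', y = 'y';
    pthread_create(&p1, NULL, echo_thread, &x);
    pthread_create(&p2, NULL, echo_thread, &y);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    putchar('\n');
    return 0;                         /* output may be "xy", "yy", ... */
}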
A Simple Example:
On Multiprocessors

Process P1                 Process P2
.                          .
chin = getchar();          .
.                          chin = getchar();
chout = chin;              chout = chin;
putchar(chout);            .
.                          putchar(chout);
.                          .

Solution ?
Enforce Single Access
• If we enforce a rule that only one process may enter the
function at a time, then:
• P1 & P2 run on separate processors
• P1 enters echo first
– P2 tries to enter but is blocked; P2 suspends
• P1 completes execution
– P2 resumes and executes echo
Race Condition
• A race condition occurs when
– Multiple processes or threads read and write shared
data items
– They do so in a way where the final result depends on
the order in which the processes' instructions execute
Race Condition: Data
Inconsistency (Exp 1)
Race Condition: Data
Inconsistency (Exp 2)
• Global variable a
• Shared by P1 & P2
• At some point in time, P1 updates "a" to 1
• At some other point in time, P2 updates "a" to 2
• Can we say that the two processes are in a race
condition?
Race Condition: Data
Inconsistency (Exp 3)
• Two processes P1 & P2 share two global variables
b and c
• Initially, b=1, c=2
• At some point in time, P1 executes b=b+c;
• At some other point in time, P2 executes c=b+c;
• Prove that this is a race condition.
Proof
• Initially, b=1, c=2
• (1) P1 executes b=b+c;
• (2) P2 executes c=b+c;
Order 12: b=1+2=3, then c=3+2=5.
Order 21: c=1+2=3, then b=1+3=4.
The final result depends on the order in which the
processes' instructions execute.
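The two orders can also be replayed deterministically; a small C sketch
that simulates the two schedules sequentially (real processes would
interleave nondeterministically):

#include <stdio.h>

int main(void)
{
    int b, c;

    b = 1; c = 2;              /* order 12: P1 first, then P2 */
    b = b + c;                 /* P1: b = 3 */
    c = b + c;                 /* P2: c = 3 + 2 = 5 */
    printf("order 12: b=%d c=%d\n", b, c);

    b = 1; c = 2;              /* order 21: P2 first, then P1 */
    c = b + c;                 /* P2: c = 3 */
    b = b + c;                 /* P1: b = 1 + 3 = 4 */
    printf("order 21: b=%d c=%d\n", b, c);
    return 0;
}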
Race Conditions
• A race condition is where multiple processes/threads
concurrently read and write a shared memory location
and the result depends on the order of execution.
• The part of the program in which race conditions can
occur is called the critical section.
Critical Sections
• A critical section is a piece of code that accesses a
shared resource (data structure or device) that must not
be concurrently accessed by more than one thread of
execution.
• The goal is to provide a mechanism by which only one
instance of a critical section is executing for a particular
shared resource.
Operating System
Concerns
• What design and management issues are raised
by the existence of concurrency?
• The OS must
– Keep track of various processes
– Allocate and de-allocate resources
– Protect the data and resources against
interference by other processes.
– Ensure that the function of a process and the
output it produces are independent of its
processing speed relative to other processes
• Subject of this chapter
Processes Interaction
Degree of awareness
• Processes unaware of each other
• Processes indirectly aware of each other
• Processes directly aware of each other
Unaware of Each Other
• Independent processes not intended to work together
• Example: multiprogramming of multiple independent
processes
• Although independent, competition for common
resources exists
• Two or more independent applications may want access
to the same disk/printer
• The OS concern is to regulate these accesses
Indirectly Aware of Each Other
• Aware of each other through some common object,
e.g., a common buffer, variables, shared files or
databases
• Not directly aware of each other via PIDs
• Such processes exhibit cooperation in sharing the
common object
• Cannot communicate directly with each other
Directly Aware
• Directly aware of each other via their PIDs
• Can communicate with each other via their
IDs
• These processes exhibit cooperation
Process Interaction
Competing Processes
• Recall processes unaware of each other
• No exchange of information, i.e., they are unable to
communicate
• Competing for a shared resource
• Problems:
(1) A process may slow down due to the execution of a
competing process.
(2) It may never get access to the shared resource in the
extreme case → starvation
Competition among
Processes for Resources
Three main control problems:
(1) Mutual Exclusion: need for mutual exclusion over a
non-shareable "critical resource", e.g. a printer
– Critical section: the piece of code in which a process
uses the critical resource
Competition among
Processes for Resources
(1) Mutual exclusion
(2) Deadlock: Mutual exclusion might lead to
deadlock
• Two competing processes P1 & P2
• Request for resources R1 & R2
• P1 holds R1 and needs R2 for progression
• P2 holds R2 and needs R1 for progression
• Both are deadlocked indefinitely
Competition among
Processes for Resources
(1) Mutual Exclusion
(2) Deadlock
(3) Starvation: a process is denied access to a shared
resource
• P1, P2, P3 → competing processes
• Need periodic access to a shared resource R
• OS assigns: P1→R, then P3→R, then P1→R,
then P3→R ... and so on
• P2 is starved !!!
Illustration of Mutual Exclusion
Processes Indirectly Aware of Each Other
• Cooperation by sharing
• Deadlock, starvation & mutual exclusion still present
• Data inconsistency may arise
• Example: a bookkeeping application in which two data
items must be maintained such that a=b
• Consider two processes, P1 and P2
– Normal order: P1: a=a+2; b=b+2; P2: b=b*4; a=a*4;
– Abnormal order: P1: a=a+2; P2: b=b*4; P1: b=b+2;
P2: a=a*4;
– Prove that the second case won't maintain the
relationship a=b (a worked trace follows below).
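A worked trace, assuming (for illustration) initial values a = b = 1:
• Normal order: a=a+2 → a=3; b=b+2 → b=3; b=b*4 → b=12;
a=a*4 → a=12. The invariant a=b holds (12 = 12).
• Abnormal order: a=a+2 → a=3; b=b*4 → b=4; b=b+2 → b=6;
a=a*4 → a=12. Now a=12 but b=6, so a=b is broken.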
Processes Aware of Each Other
• Cooperation by communication
– Methods provided by the OS or language libraries
• No requirement for mutual exclusion
• Deadlock and starvation still possible
– Process A waits for B and B waits for A
– Processes A and B both want to communicate with C;
B never gets a chance to communicate
Requirements for
Mutual Exclusion
1. Only one process at a time is allowed in the critical
section for a resource
2. A process that halts in its noncritical section must do
so without interfering with other processes
3. A process requiring access to its critical section MUST
NOT be delayed indefinitely: no deadlock or starvation
Requirements for
Mutual Exclusion (contd.)
4. A process must not be delayed access to a critical
section when there is no other process using it
5. No assumptions are made about relative process
speeds or number of processes
6. A process remains inside its critical section for a finite
time only
Roadmap
• Principles of Concurrency - Done
• Mutual Exclusion: Software Solutions
– Dekker's and Peterson's Algorithms
– Students are advised to read Appendix A in Stallings'
book and submit the report (Assignment 4) - due on
26th Dec, 2017.
• Q1. Devise a correct solution for mutual exclusion
based on Dekker's algorithm.
• Q2. Study Peterson's solution and prove that it satisfies
the 6 requirements of mutual exclusion.
– Also, read this article (not part of the assignment):
https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
• Mutual Exclusion: Hardware Support
• Semaphores
• Monitors
• Message Passing
• Readers/Writers Problem
Dekker's Algorithm - First Attempt
• Global Boolean variable turn (0 or 1)
• If turn==0, it is process 0's turn to enter the critical section
• If turn==1, it is process 1's turn to enter the critical section

Programming reminder:
while (expression)
{ /* do nothing */ }   /* loops as long as the expression is true */
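A minimal sketch of this first attempt, assuming two processes with
IDs 0 and 1 (shown for process i; the other process is 1-i):

int turn = 0;              /* shared: whose turn it is to enter */

/* entry protocol for process i */
while (turn != i)
    ;                      /* busy wait: not our turn yet */
/* critical section */
turn = 1 - i;              /* exit: hand the turn to the other process */
/* remainder section */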
Dekker's Algorithm - First Attempt (contd.)
• Global Boolean variable turn (0 or 1)
• If turn==0, it is process 0's turn to enter the critical section
• If turn==1, it is process 1's turn to enter the critical section
• Problems → Important !!
– Busy waiting: one process spins on the condition until it
becomes true - why is this a problem?
– Strict alternation is required - how?
– If one process fails, the other is blocked - how, and where
does it fail?
– A process's access to its CS depends on the relative speed
of the other (imagine process 0 enters its CS once a minute
while process 1 needs to enter 1000 times a minute)
Dekker's Algorithm - Second Attempt
• Boolean flag[2]: a private key for each process
– flag[0] is the private key of process 0
– flag[1] is the private key of process 1
• Each process can read the other's key but can only change
its own
• Process 0 does nothing while flag[1]==true, and vice versa
for process 1
• If one process fails outside the critical section, the other
won't be blocked
• If one process fails inside the critical section, the other
will be blocked
• This solution does not always ensure mutual exclusion →
prove it (see the sketch below)
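A sketch of this second attempt, again for process i (the Boolean flags
are written as ints for a C-style sketch):

int flag[2] = { 0, 0 };    /* flag[i] = 1: process i wants its CS */

/* entry protocol for process i */
while (flag[1 - i])
    ;                      /* wait while the other holds its key */
flag[i] = 1;               /* set our own key */
/* critical section */
flag[i] = 0;               /* exit: release our key */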
Dekker's Algorithm - Second Attempt (contd.)
• No strict alternation required
• Does not always ensure mutual exclusion, e.g., consider
the interleaving where both processes test the other's
flag before either sets its own: both see false, and both
enter the critical section.
Dekker's Algorithm - Third Attempt
• Ensures mutual exclusion
• No strict alternation required
• Problem of deadlock:
– P0 sets flag[0]=true and is preempted
– P1 sets flag[1]=true and proceeds
– Deadlock occurs: neither process can progress
• Students are required to devise a "correct solution" based
on Dekker's algorithm - see the assignment.
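A sketch of the third attempt; the flag is now set before the check,
which is exactly what opens the door to deadlock:

/* entry protocol for process i */
flag[i] = 1;               /* claim first ... */
while (flag[1 - i])
    ;                      /* ... then wait for the other to withdraw */
/* critical section */
flag[i] = 0;

If both processes set their flags before either tests the other's, both
spin forever.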
Peterson's Algorithm
For Process P0:

flag[0] = true;             // declare your interest to enter
turn = 1;                   // assume it is the other's turn: give P1 a chance
while (flag[1] && turn == 1)
    ;                       // do nothing
/* critical section */
flag[0] = false;
/* remainder section */

For Process P1:

flag[1] = true;             // declare your interest to enter
turn = 0;                   // assume it is the other's turn: give P0 a chance
while (flag[0] && turn == 0)
    ;                       // do nothing
/* critical section */
flag[1] = false;
/* remainder section */
Peterson's Algo - Properties
• Ensures mutual exclusion
• Ensures deadlock freedom (no mutual blocking)
• Ensures starvation freedom
• Students are advised to study these properties and prove
that they are satisfied by Peterson's Algorithm - see
Assignment 3.
Roadmap
• Principles of Concurrency - Done
• Mutual Exclusion: Software Solutions
– Dekker's and Peterson's Algorithms
– Students are advised to read Appendix A in
Stallings' book and submit the report
(Assignment 4)
• Mutual Exclusion: Hardware Support
• Semaphores
• Monitors
• Message Passing
• Readers/Writers Problem
Disabling Interrupts
• Uniprocessors only allow interleaving
• Interrupt Disabling
– A process runs until it invokes an operating
system service or until it is interrupted
– Disabling interrupts guarantees mutual
exclusion
– Will not work in multiprocessor architecture
Pseudo-Code
while (true) {
/* disable interrupts */;
/* critical section */;
/* enable interrupts */;
/* remainder */;
}
Special Machine
Instructions
• Compare&Swap Instruction
– also called a “compare and exchange
instruction”
• Exchange Instruction
Compare&Swap
Instruction

int compare_and_swap(int *word, int testval, int newval)
{
    int oldval;
    oldval = *word;
    if (oldval == testval)
        *word = newval;
    return oldval;
}
Mutual Exclusion (fig 5.2)
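A sketch of the mutual-exclusion pattern this figure refers to, in the
spirit of Stallings' Figure 5.2 (bolt is the shared lock word,
initialized to 0):

int bolt = 0;              /* shared: 0 = free, 1 = held */

void process(void)
{
    while (1) {
        while (compare_and_swap(&bolt, 0, 1) == 1)
            ;              /* spin until we swap the 0 for our 1 */
        /* critical section */
        bolt = 0;          /* release the lock */
        /* remainder */
    }
}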
Exchange instruction

void exchange(int *reg, int *memory)
{
    /* swaps a register value with a memory word;
       performed atomically by the hardware */
    int temp;
    temp = *memory;
    *memory = *reg;
    *reg = temp;
}
Exchange Instruction
(fig 5.2)
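A matching sketch for the exchange-based version, again following the
pattern of Figure 5.2 (each process keeps a local key):

int bolt = 0;              /* shared: 0 = free */

void process(void)
{
    while (1) {
        int key = 1;       /* local token */
        do {
            exchange(&key, &bolt);
        } while (key != 0);    /* we swapped out the 0: lock acquired */
        /* critical section */
        bolt = 0;          /* release the lock */
        /* remainder */
    }
}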
Hardware Mutual
Exclusion: Advantages
• Applicable to any number of processes on
either a single processor or multiple
processors sharing main memory
• It is simple and therefore easy to verify
• It can be used to support multiple critical
sections
Hardware Mutual
Exclusion: Disadvantages
• Busy-waiting consumes processor time
• Starvation is possible when a process
leaves a critical section and more than one
process is waiting.
– Some process could indefinitely be denied
access.
• Deadlock is possible
Roadmap
• Principles of Concurrency
• Mutual Exclusion: Software Solution
• Mutual Exclusion: Hardware Support
• Semaphores
• Monitors
• Message Passing
• Readers/Writers Problem
Semaphore
Semaphore - a synchronization tool:
• A flag used to indicate that a routine cannot proceed if a
shared resource is already in use by another routine.
• Semaphore S: an integer variable
– The initial value of S depends on its application - remember
this point.
• Two standard operations modify S: wait() and signal()
– Originally called P() and V()
• Less complicated
• Can only be accessed via two indivisible (atomic) operations:

wait(S) {               /* executed as one atomic operation */
    while (S <= 0)
        ;               /* no operation */
    S--;
}

signal(S) {             /* executed as one atomic operation */
    S++;
}
Semaphore as General Synchronization Tool
• Two types:
– Counting semaphore: integer value can range over an
unrestricted domain
– Binary semaphore: integer value can range only between
0 and 1; can be simpler to implement
• Also known as mutex locks (they are locks that provide
mutual exclusion)
• Semaphores are used for many concurrency problems on a
multiprocessor system
• We will discuss the following three uses of semaphores:
– Critical-section problem → mutual exclusion → mutex
– Ensuring access to n instances of a resource → counting
semaphores
– Process serialization problem
Semaphores Usage
Usage 1:
• Binary semaphores can solve the critical-section problem
for n processes.
• The n processes share a semaphore, mutex, initialized to 1:

int mutex = 1;    // initialized to 1

• Each process Pi is organized as:

Pi:
    wait(mutex);
    /* critical section */
    signal(mutex);
    /* remainder section */

where the operations are:

wait(mutex) {
    while (mutex <= 0)
        ;         /* no-op */
    mutex--;
}

signal(mutex) {
    mutex++;
}

• Only one process is allowed into the critical section
(mutual exclusion).
Problem & Solution ?
Problem- Busy waiting
• Busy waiting, which wastes CPU
cycles that some other process
might be able to use productively.
• This type of semaphore is also
called a spinlock because the
process “spins” while waiting for
the lock.
Binary Semaphore - No Busy Waiting

int mutex = 1;    // initialized to 1

Each process Pi (i = 1, 2, 3, ..., n):
    wait(mutex);      // Pi calls wait
    /* critical section */
    signal(mutex);    // Pi calls signal
    /* remainder section */

wait(mutex) {
    if (mutex == 1)
        mutex = 0;
    else
        /* place this process in the blocked queue of mutex */
}

signal(mutex) {
    if (/* mutex queue is empty */)
        mutex = 1;
    else
        /* remove one process from the mutex queue */
}
Counting Semaphores
Usage 2: Resource Usage
• Assume that there are n instances of a resource R
• Suppose P processes are going to access the resource in
a multiprocessing system
• A counting semaphore s can be used to ensure that
processes in need of access get access if at least one
instance of R is available
• If no instance of the resource R is available, block the
requesting process
• Blocked process(es) can access the resource when an
instance becomes available

int s = 10;    // initialize to the number of instances of R

wait(s) {
    s--;
    if (s < 0)
        /* block this process and place it in the queue */
}

signal(s) {
    s++;
    if (s <= 0)
        /* remove a process from the blocked queue */
}
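A hedged usage sketch in the same pseudocode style (use_resource is
an illustrative name, not from the slides):

void use_resource(void)
{
    wait(s);       /* blocks if all instances of R are taken */
    /* ... use one instance of R ... */
    signal(s);     /* return the instance; may unblock a waiter */
}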
Strong/Weak
Semaphore
• A queue is used to hold processes waiting
on the semaphore
– In what order are processes removed from
the queue?
• Strong Semaphores use FIFO
• Weak Semaphores don’t specify the
order of removal from the queue
Semaphores Usage
Usage 3:
• To solve serialization problems.
• Example:
– Two concurrent processes: P1 and P2
– Statement S1 in P1 needs to be performed before
statement S2 in P2
• Need to make P2 wait until P1 tells it "it is OK to proceed"
– Define a semaphore "synch" and initialize it to 0:

int synch = 0;    // initialized to 0

– In P2:
    wait(synch);
    S2;

– And in P1:
    S1;
    signal(synch);

where:

wait(synch) {
    while (synch <= 0)
        ;         /* no-op */
    synch--;
}

signal(synch) {
    synch++;
}
Classical Problems of Synchronization
• Bounded-Buffer Problem
– Bounded buffers (producer/consumer) can be seen in,
e.g., streaming filters or packet switching in networks
(a semaphore sketch follows after this list).
• Readers and Writers Problem
– Database readers and writers: online reservation
systems; file systems.
• Dining-Philosophers Problem
– Dining philosophers could be a sequence of active
database transactions that have a circular
wait-for-lock dependence.
• The Sleeping Barber Problem
– The sleeping barber is often thought of as a
client-server relationship.
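One standard shape of a semaphore solution to the bounded-buffer
problem, as a sketch only (produce, insert, remove, and consume are
illustrative helpers; N is the buffer size):

semaphore mutex = 1;   /* guards the buffer */
semaphore empty = N;   /* count of free slots */
semaphore full  = 0;   /* count of filled slots */

/* producer */
while (1) {
    item = produce();
    wait(empty);       /* wait for a free slot */
    wait(mutex);       /* lock the buffer */
    insert(item);
    signal(mutex);
    signal(full);      /* announce a filled slot */
}

/* consumer */
while (1) {
    wait(full);        /* wait for a filled slot */
    wait(mutex);
    item = remove();
    signal(mutex);
    signal(empty);     /* announce a free slot */
    consume(item);
}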
Reading Assignment
• Study & discuss the classical problems of
synchronization
• Solve these problems using semaphores
• Your solution should ensure:
– Mutual exclusion
– No busy waiting
– No deadlock
– No starvation
