Chapter 3 OS

MODULE 3

CONCURRENCY CONTROL
SYLLABUS
MULTIPLE PROCESSES
 Operating system design is concerned with the management of processes and threads:
 Multiprogramming – management of multiple processes in a uniprocessor system.
 Multiprocessing – management of multiple processes in a multiprocessor system.
 Distributed processing – management of multiple processes in a distributed system.
CONCURRENCY :
ARISES IN THREE DIFFERENT CONTEXTS
PRINCIPLES OF CONCURRENCY
 Interleaving and overlapping
 can be viewed as examples of concurrent
processing
 both present the same problems
 Uniprocessor – the relative speed of execution of processes cannot be predicted; it depends on:
 the activities of other processes
 the way the OS handles interrupts
 the scheduling policies of the OS
 The problems are the same in multiprocessing systems.
DIFFICULTIES IN CONCURRENCY
 Sharing of global resources.
 It is difficult for the OS to manage the allocation of resources optimally.
 It is difficult to locate programming errors, as results are not deterministic and reproducible.
SIMPLE EXAMPLE
 Consider the following procedure

char chin, chout;   /* global (shared) variables */

void echo()
{
    chin = getchar();
    chout = chin;
    putchar(chout);
}
SIMPLE EXAMPLE
 In a single-processor multiprogramming system:
 The user jumps from one application to another.
 Each application uses the same keyboard for input and the same screen for output.
 A single copy of the echo procedure is used.
 Main memory is shared by all processes.

SIMPLE EXAMPLE
 Sharing leads to problems.
 Consider the following sequence:
1) Process P1 invokes the echo procedure and is interrupted immediately after getchar returns its value and stores it in chin. At this point, the most recently entered character, x, is stored in the variable chin.
2) Process P2 is activated and invokes the echo procedure, which runs to conclusion, inputting and then displaying a single character, y, on the screen.
3) Process P1 is resumed. By this time, the value x has been overwritten in chin and therefore lost. Instead, chin contains y, which is transferred to chout and displayed.
SIMPLE EXAMPLE
 Suppose we allow only one process at a time to be in that procedure.
 Sequence:
1) Process P1 invokes the echo procedure and is interrupted immediately after getchar returns its value and stores it in chin. At this point, the most recently entered character, x, is stored in the variable chin.
2) Process P2 is activated and invokes the echo procedure. However, because P1 is still inside the echo procedure, although currently suspended, P2 is blocked from entering the procedure.
3) At some later time, process P1 is resumed and completes its execution. The proper character x is displayed.
4) When P1 exits echo, this removes the block on P2. When P2 is later resumed, the echo procedure is successfully invoked.
SIMPLE EXAMPLE

 In a multiprocessor system:
 The same problem of protecting shared resources arises, and the same solution works.
 Suppose processes P1 and P2 execute each on a separate processor and both share the echo procedure.
SIMPLE EXAMPLE
Suppose we allow only one process at a time to be in that procedure.
Sequence:
1) Processes P1 and P2 execute each on a separate processor.
2) Process P1 is inside echo procedure. Process P2
invokes the echo procedure. However, because P1 is
still inside the echo procedure, although currently
suspended, P2 is blocked from entering the
procedure.
3) At some later time, process P1 is resumed and
completes its execution. When P1 exits echo, this
removes the block on P2. When P2 is later resumed,
the echo procedure is successfully invoked.
RACE CONDITION
 Race condition: a situation where several processes access and manipulate shared data concurrently. The final value of the shared data depends upon which process finishes last.
 Example: suppose two processes P1 and P2 share a global variable 'a'. At some point in its execution P1 updates 'a' to the value 1, and at some point P2 updates 'a' to the value 2. Thus the two tasks are in a race to write a value to the variable 'a'. Here, the process that updates last determines the final value of 'a'.
 To prevent race conditions, concurrent processes must be synchronized.
THE CRITICAL-SECTION PROBLEM
 n processes all compete to use some shared data.
 Each process has a code segment, called the critical section, in which the shared data is accessed.
 Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
SOLUTION TO CRITICAL-SECTION PROBLEM
A solution to the critical-section problem must
satisfy the following three requirements:
1. Mutual Exclusion. If process Pi is executing in its
critical section, then no other processes can be
executing in their critical sections.

2. Progress. If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.
SOLUTION TO CRITICAL-SECTION PROBLEM
3. Bounded Waiting. A bound must exist on the
number of times that other processes are allowed to
enter their critical sections after a process has made
a request to enter its critical section and before that
request is granted.
 Assume that each process executes at a nonzero speed.
 No assumption is made concerning the relative speed of the n processes.
GENERAL STRUCTURE OF PROCESS
 General structure of process Pi (other process Pj)
 Processes may share some common variables to synchronize
their actions.
do {
    entry section
        // decides whether the process may enter its critical section
    critical section
        // contains access to shared variables or other resources
    exit section
        // releases the access acquired in the entry section
    remainder section
        // a thread may end its execution in this section
} while (TRUE);
GENERAL STRUCTURE OF PROCESS
 Explanation:
Entry Section: The process requests to enter the critical section. This part of the code decides whether the process can enter the critical section or not.
Critical Section: The code segment that accesses shared resources and that has to be executed as an atomic action.
Exit Section: The locking done in the entry section is undone here.
Remainder Section: The remaining part of the program.
Solutions to
Critical Section
Peterson’s Solution
PETERSON’S SOLUTION
 Peterson's solution is restricted to two processes that alternate execution between their critical sections and remainder sections.
 The processes are numbered P0 and P1.
 Peterson's solution requires the two processes to share two data items:
int turn;
boolean flag[2];
 This is a classic software-based solution to the critical-section problem.
ALGORITHM FOR PI
 Algorithm for Pi
 Algorithm for Process Pj
Operating System Concepts

ALGORITHM
 Meets all three requirements; solves the critical-
section problem for two processes.

Process P0:
do {
    flag[0] = TRUE;
    turn = 1;
    while (flag[1] == TRUE && turn == 1)
        ;                  // busy wait
    // critical section
    flag[0] = FALSE;
    // remainder section
} while (1);

Process P1:
do {
    flag[1] = TRUE;
    turn = 0;
    while (flag[0] == TRUE && turn == 0)
        ;                  // busy wait
    // critical section
    flag[1] = FALSE;
    // remainder section
} while (1);
ALGORITHM

 Meets all three requirements; solves the critical-section problem for two processes.
 Mutual Exclusion — satisfied by the while loop; at most one process can be in its critical section at a time.
 Progress — satisfied, as any process requesting the critical section will get it when no other process is in its critical section.
 Bounded Wait — satisfied, as each process sets its flag to false at the end of its critical section, which lets the waiting process in.
Solutions to
Critical Section
Hardware Solution
SYNCHRONIZATION HARDWARE
 Many systems provide hardware support for critical
section code.
 Uniprocessors - could disable interrupts
 Currently running code would execute without
preemption
 Generally too inefficient on multiprocessor systems
 Operating systems using this approach are not broadly scalable
 Modern machines provide special atomic hardware
instructions
 Atomic = non-interruptible
 Either test memory word and set value
 Or swap contents of two memory words
SYNCHRONIZATION HARDWARE
TestAndSet():
 Important characteristics:
 Two TestAndSet() instructions cannot be run in parallel on two different CPUs.
 The TestAndSet() instruction is executed atomically.
 The instruction cannot be interrupted.

SOLUTION TO CRITICAL-SECTION PROBLEM
USING LOCKS
General structure of a process:
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
TESTANDSET INSTRUCTION
 Definition of TestAndSet()=>
boolean TestAndSet(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
Must be executed atomically
SOLUTION USING TESTANDSET
 Shared Boolean variable lock, initialized to false.
 Solution:
do {
    while (TestAndSet(&lock))
        ;              // do nothing
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
TESTANDSET INSTRUCTION
 Trace of TestAndSet() when lock is initially FALSE:
boolean TestAndSet(boolean *target)
{
    boolean rv = *target;   // &lock is passed, so target holds the address of lock
                            // rv = *target = FALSE
    *target = TRUE;         // set *target (lock) = TRUE
    return rv;              // returns FALSE, so the caller leaves the while loop
}
TESTANDSET INSTRUCTION
 Mutual Exclusion: guaranteed by the hardware.
 Progress: satisfied; no process blocks when the critical section is free.
 Bounded wait: consider two processes P1 and P2. P1 may enter the critical section again and again while P2 never gets a chance. So this condition is not met.
SYNCHRONIZATION HARDWARE
Swap():
 Important characteristics:
 Two Swap() instructions cannot be run in parallel on two different CPUs.
 The Swap() instruction is executed atomically.
 The instruction cannot be interrupted.
SWAP INSTRUCTION
 Definition:
void Swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
SOLUTION USING SWAP
 Shared Boolean variable lock, initialized to FALSE; each process has a local Boolean variable key.
 Solution:
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);

Trace: initially lock = FALSE and key = TRUE; after Swap, lock = TRUE and key = FALSE, so the process enters the critical section. For entering the critical section: lock = TRUE and key = FALSE.
SWAP INSTRUCTION
 Mutual Exclusion: guaranteed by the hardware.
 Progress: satisfied; no process blocks when the critical section is free.
 Bounded wait: consider two processes P1 and P2. P1 may enter the critical section again and again while P2 never gets a chance. So this condition is not met.
Solutions to
Critical Section
Semaphore
SEMAPHORE
 A synchronization tool that does not require busy waiting.
 A semaphore S is an integer variable.
 The initial value is S = 1 for solving the critical-section problem.
 Apart from initialization, it is accessed only through two standard atomic operations:
1) wait()  2) signal() — originally called P() and V().
 Less complicated than the hardware solutions.
SEMAPHORE
 Two indivisible (atomic) operations:
wait(S) {
    while (S <= 0)
        ;    // no-op
    S--;     // decrements the value
}

signal(S) {
    S++;     // increments the value
}
SEMAPHORE
 Example: suppose S = 1 and process P1 arrives.
do {
    wait(S);
    // critical section
    signal(S);
    // remainder section
} while (TRUE);

Trace: S = 1; P1 executes wait(S) => S = 0 and P1 enters the critical section. If P2 arrives at the same time, S = 0, so wait(S) finds S <= 0 true and keeps looping. When P1 finishes, signal(S) => S = 1, and now P2 can enter.
SEMAPHORE
 Mutual Exclusion – achieved.
 Progress – achieved; no process is blocked when the critical section is free.
 Bounded wait – not achieved: one process can keep entering the CS while the others never get a chance.
 The semaphore is still a good solution, as it fulfils two of the important criteria.
SEMAPHORE AS GENERAL
SYNCHRONIZATION TOOL
 Counting semaphore – the integer value can range over an unrestricted domain.
 Binary semaphore – the integer value can range only between 0 and 1; can be simpler to implement.
 Also known as mutex locks.
 A counting semaphore S can be implemented using binary semaphores.
SEMAPHORE AS GENERAL
SYNCHRONIZATION TOOL
 Provides mutual exclusion
Semaphore mutex;    // initialized to 1
do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);
SEMAPHORE IMPLEMENTATION

 Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
 Thus, implementation becomes the critical section problem where
the wait and signal code are placed in the critical section.
 Could now have busy waiting in critical section implementation
 But implementation code is short
 Little busy waiting if critical section rarely occupied
 Note that applications may spend lots of time in critical sections and
therefore this is not a good solution.
SEMAPHORE IMPLEMENTATION WITH NO BUSY
WAITING
 With each semaphore there is an associated waiting queue. Each
entry in a waiting queue has two data items:
 value (of type integer)
 pointer to next record in the list

 Two operations:
 block – place the process invoking the operation on the appropriate waiting queue.
 wakeup – remove one of the processes in the waiting queue and place it in the ready queue.
SEMAPHORE IMPLEMENTATION WITH NO BUSY
WAITING (CONT.)
 Implementation of wait:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
 Implementation of signal:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
MONITORS
 A high-level abstraction that provides a convenient and effective
mechanism for process synchronization
 Abstract data type, internal variables only accessible by code within
the procedure
 Only one process may be active within the monitor at a time
 But not powerful enough to model some synchronization schemes

monitor monitor-name
{
    // shared variable declarations

    procedure P1 (…) { …. }
    …
    procedure Pn (…) { …… }

    initialization code (…) { … }
}
CHARACTERISTICS OF MONITORS IN OS

A monitor in OS has the following characteristics:

 We can only run one program at a time inside the monitor.


 Monitors in an operating system are defined as a group of
methods and fields that are combined with a special type of
package in the OS.
 A program cannot access the monitor's internal variable if it is
running outside the monitor. However, a program can call the
monitor's functions.
 Monitors were created to make synchronization problems less
complicated.
 Monitors provide a high level of synchronization between
processes.
COMPONENTS OF MONITOR IN AN OPERATING SYSTEM

The monitor is made up of four primary parts:

 Initialization: The code for initialization is included in the


package, and we just need it once when creating the monitors.
 Private Data: It is a feature of the monitor in an operating
system to make the data private. It holds all of the monitor's
secret data, which includes private functions that may only be
utilized within the monitor. As a result, private fields and
functions are not visible outside of the monitor.
 Monitor Procedure: Procedures or functions that can be invoked
from outside of the monitor are known as monitor procedures.
 Monitor Entry Queue: Another important component of the
monitor is the Monitor Entry Queue. It contains all of the threads,
which are commonly referred to as procedures only.
SCHEMATIC VIEW OF A MONITOR
CONDITION VARIABLES
 condition x, y;
 Two operations are allowed on a condition variable:
 x.wait() – the process that invokes the operation is suspended until x.signal()
 x.signal() – resumes one of the processes (if any) that invoked x.wait()
 If no x.wait() is pending on the variable, then x.signal() has no effect on the variable


MONITOR WITH CONDITION VARIABLES
CLASSICAL PROBLEMS OF SYNCHRONIZATION

 Bounded-Buffer Problem
 Readers and Writers Problem

Semaphore:
Bounded Buffer Problem
BOUNDED-BUFFER PROBLEM
 Also called the producer-consumer problem.
 N buffers, each of which can hold one item.

Producer => [ buffer ] => Consumer

 Three issues:
1) The producer has to make sure that at least one cell is empty.
2) The consumer has to make sure that at least one cell is filled.
3) Both cannot work with the buffer at the same time.
BOUNDED-BUFFER PROBLEM
Three semaphores are used:
 Semaphore mutex, initialized to the value 1
 Semaphore full, initialized to the value 0
 Semaphore empty, initialized to the value N
BOUNDED BUFFER PROBLEM (CONT.)
 The structure of the producer process:
do {
    // produce an item in nextp
    wait(empty);
    wait(mutex);
    // add the item to the buffer
    signal(mutex);
    signal(full);
} while (TRUE);

 The structure of the consumer process:
do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer to nextc
    signal(mutex);
    signal(empty);
    // consume the item in nextc
} while (TRUE);
BOUNDED BUFFER PROBLEM (CONT.)
 Trace of the producer process (E = empty, F = full, n = 5):
Initially mutex = 1, E = n = 5, F = 0.
One item (say x) is produced:
    wait(E): E = 5 => 4
    wait(mutex): mutex = 1 => 0
    the item x is added to the buffer
    signal(mutex): mutex = 0 => 1
    signal(F): F = 0 => 1
The same way a second item y is produced; now E = 3 and F = 2.
BOUNDED BUFFER PROBLEM (CONT.)
 State after two items are produced: the buffer holds x and y; mutex = 1, F = 2, E = 3.
BOUNDED BUFFER PROBLEM (CONT.)
 Trace of the consumer process (buffer holds x and y; mutex = 1, E = 3, F = 2):
One item is consumed:
    wait(full): F = 2 => 1
    wait(mutex): mutex = 1 => 0
    the item y is removed from the buffer
    signal(mutex): mutex = 0 => 1
    signal(empty): E = 3 => 4
The buffer now holds x.
BOUNDED BUFFER PROBLEM (CONT.)

 On the producer side, when the buffer is full: E = 0, F = 5, mutex = 1. wait(E) blocks, because E is 0 and cannot be decremented.
 On the consumer side, when the buffer is empty: E = 5, F = 0, mutex = 1. wait(F) blocks, because F is 0 and cannot be decremented.
Semaphore
Readers-Writers Problem
READERS-WRITERS PROBLEM
 A data set is shared among a number of concurrent processes.
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write
 Problem – allow multiple readers to read at the same time, but only a single writer may access the shared data at a time.
 Shared data:
 the data set
 Semaphore mutex, initialized to 1 (controls access to readcount)
 Semaphore wrt, initialized to 1 (writer access)
 Integer readcount, initialized to 0 (how many processes are currently reading the object)
READERS-WRITERS PROBLEM (CONT.)
 The structure of a writer process:
do {
    wait(wrt);
    // writing is performed
    signal(wrt);
} while (TRUE);

Trace: initially wrt = 1. wait(wrt): wrt = 1 => 0; the write operation is performed; signal(wrt): wrt = 0 => 1.
READERS-WRITERS PROBLEM (CONT.)
 Reader=>
 If reader1 is inside the CS and reader2
came, it is allowed.
 If reader1 is inside the CS and writer came,
it is not allowed.
 The mutex semaphore synchronizes access to readcount among the different readers.
READERS-WRITERS PROBLEM (CONT.)
 The structure of a reader process:
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (TRUE);

Trace (entry): the first reader arrives: wait(mutex) => mutex = 1 => 0; readcount = 0 => 1; since readcount == 1, wait(wrt) => wrt = 1 => 0, so a writer cannot access the CS; signal(mutex) => mutex back to 1. As mutex = 1, another reader can come and enter for the read operation; readcount = 2 means two readers are reading at the same time.
READERS-WRITERS PROBLEM (CONT.)
 Trace (exit): a reader is done with reading: wait(mutex) => mutex = 1 => 0; readcount--; if readcount == 0, allow a writer to enter the CS by signal(wrt), incrementing wrt to 1; finally signal(mutex) => mutex back to 1.
