Chapter 3 discusses the concept of processes in operating systems, detailing their execution, management, and communication. It covers process states, scheduling, creation, termination, and interprocess communication methods such as message passing and shared memory. The chapter emphasizes the importance of process management activities and the role of the operating system in handling multiple processes efficiently.

Processes

Concept
Chapter 3: Processes

 Process Concept
 Process Scheduling
 Operations on Processes
 Interprocess Communication
 Examples of IPC Systems
 Communication in Client-Server Systems
Process Concept

 An operating system executes a variety of programs:


 Batch system – jobs
 Time-shared systems – user programs or tasks
 Textbook uses the terms job and process almost interchangeably
 Process – a program in execution; process execution must progress in
sequential fashion
 Multiple parts
  The program code, also called text section
  Current activity including program counter, processor registers
  Stack containing temporary data
   Function parameters, return addresses, local variables
  Data section containing global variables
  Heap containing memory dynamically allocated during run time
Process Concept (Cont.)

 Program is a passive entity stored on disk (executable file); a process is an active entity
 Program becomes a process when the executable file is loaded into memory
 Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.
 One program can be several processes
 Consider multiple users executing the same
program
Process Management

 A process is a program in execution. It is a unit of work within the


system. Program is a passive entity, process is an active entity.
 Process needs resources to accomplish its task
 CPU, memory, I/O, files
 Initialization data
 Process termination requires reclaim of any reusable resources
 Single-threaded process has one program counter specifying location of
next instruction to execute
 Process executes instructions sequentially, one at a time, until completion
 Multi-threaded process has one program counter per thread
 Typically system has many processes, some user, some operating
system running concurrently on one or more CPUs
 Concurrency by multiplexing the CPUs among the processes / threads
Process Management Activities

The operating system is responsible for the following activities in


connection with process management:

 Creating and deleting both user and system


processes
 Suspending and resuming processes
 Providing mechanisms for process
synchronization
 Providing mechanisms for process
communication
 Providing mechanisms for deadlock handling
Process in Memory
Process State

 As a process executes, it changes state


 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to
occur
 ready: The process is waiting to be assigned to a
processor
 terminated: The process has finished execution
Diagram of Process State
Process Control Block (PCB)

Information associated with each process


(also called task control block)
 Process state – running, waiting, etc
 Program counter – location of instruction to
next execute
 CPU registers – contents of all process-
centric registers
 CPU scheduling information- priorities,
scheduling queue pointers
 Memory-management information – memory
allocated to the process
 Accounting information – CPU used, clock
time elapsed since start, time limits
 I/O status information – I/O devices allocated
to process, list of open files
CPU Switch From Process to Process
Process Scheduling

 Maximize CPU use, quickly switch processes


onto CPU for time sharing
 Process scheduler selects among available
processes for next execution on CPU
 Maintains scheduling queues of processes
 Job queue – set of all processes in the system
 Ready queue – set of all processes residing in main
memory, ready and waiting to execute
 Device queues – set of processes waiting for an I/O
device
 Processes migrate among the various queues
Representation of Process Scheduling

■ Queueing diagram represents queues, resources, flows


Schedulers

 Short-term scheduler (or CPU scheduler) – selects which process should be


executed next and allocates CPU
 Sometimes the only scheduler in a system

 Short-term scheduler is invoked frequently (milliseconds)  (must be fast)

 Long-term scheduler (or job scheduler) – selects which processes should be


brought into the ready queue
 Long-term scheduler is invoked infrequently (seconds, minutes)  (may be slow)

 The long-term scheduler controls the degree of multiprogramming

 Processes can be described as either:


 I/O-bound process – spends more time doing I/O than computations, many
short CPU bursts
 CPU-bound process – spends more time doing computations; few very long CPU
bursts
 Long-term scheduler strives for good process mix
Addition of Medium Term Scheduling

■ Medium-term scheduler can be added if degree of multiple


programming needs to decrease
● Remove process from memory, store on disk, bring back in
from disk to continue execution: swapping
Context Switch

 When CPU switches to another process, the system


must save the state of the old process and load the
saved state for the new process via a context switch
 Context of a process represented in the PCB
 Context-switch time is overhead; the system does no
useful work while switching
 The more complex the OS and the PCB  the longer the
context switch
 Time dependent on hardware support
 Some hardware provides multiple sets of registers per CPU 
multiple contexts loaded at once
Operations on Processes

 System must provide mechanisms for:


 process creation,
 process termination,
 and so on as detailed next
Process Creation

 Parent process creates children processes, which, in turn, create
other processes, forming a tree of processes
 Generally, process identified and managed via a process
identifier (pid)
 Resource sharing options
 Parent and children share all resources

 Children share subset of parent’s resources

 Parent and child share no resources

 Execution options
 Parent and children execute concurrently

 Parent waits until children terminate


A Tree of Processes in Linux
Process Creation (Cont.)

 Address space
 Child duplicate of parent
 Child has a program loaded into it

 UNIX examples
 fork() system call creates new process
 exec() system call used after a fork() to replace the
process’ memory space with a new program
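
A minimal sketch of this fork()/exec() pattern on UNIX; the program run by the child (/bin/ls) is just an illustrative choice, and error handling is kept to a minimum:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {              /* child process */
        execlp("/bin/ls", "ls", NULL);  /* replace the child's memory image with a new program */
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    } else {                            /* parent process */
        wait(NULL);                     /* wait for the child to terminate */
        printf("child %d completed\n", (int)pid);
    }
    return 0;
}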
Process Termination

 Process executes last statement and then asks the


operating system to delete it using the exit() system
call.
 Returns status data from child to parent (via wait())
 Process’ resources are deallocated by operating system
 Parent may terminate the execution of children
processes using the abort() system call. Some
reasons for doing so:
 Child has exceeded allocated resources
 Task assigned to child is no longer required
 The parent is exiting and the operating system does not allow
a child to continue if its parent terminates
Process Termination

 Some operating systems do not allow a child to exist if its parent
has terminated. If a process terminates, then all its children must
also be terminated.
 cascading termination. All children, grandchildren, etc. are terminated.
 The termination is initiated by the operating system.
 The parent process may wait for termination of a child process by
using the wait()system call. The call returns status information
and the pid of the terminated process
pid = wait(&status);
 If no parent waiting (did not invoke wait()) process is a zombie
 If parent terminated without invoking wait , process is an orphan
Interprocess Communication

 Processes within a system may be independent or cooperating


 Cooperating process can affect or be affected by other processes, including
sharing data
 Reasons for cooperating processes:
 Information sharing

 Computation speedup

 Modularity

 Convenience

 Cooperating processes need interprocess communication (IPC)


 Two models of IPC
 Shared memory

 Message passing
Communications Models

(a) Message passing. (b) Shared memory.


Cooperating Processes

 Independent process cannot affect or be


affected by the execution of another process
 Cooperating process can affect or be affected
by the execution of another process
 Advantages of process cooperation
 Information sharing
 Computation speed-up
 Modularity
 Convenience
Producer-Consumer Problem

 Paradigm for cooperating processes, producer


process produces information that is consumed by a
consumer process
 unbounded-buffer places no practical limit on the size of the
buffer
 bounded-buffer assumes that there is a fixed buffer size
Bounded-Buffer – Shared-Memory Solution

 Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

 Solution is correct, but can only use BUFFER_SIZE-1 elements


Bounded-Buffer – Producer

item next_produced;
while (true) {
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}
Bounded Buffer – Consumer

item next_consumed;
while (true) {
while (in == out)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;

/* consume the item in next consumed */


}
Interprocess Communication – Shared Memory

 An area of memory shared among the processes


that wish to communicate
 The communication is under the control of the
user processes, not the operating system.
 The major issue is to provide a mechanism that will
allow the user processes to synchronize their
actions when they access shared memory.
 Synchronization is discussed in greater detail in
Chapter 5.
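
As a concrete, hedged illustration of shared-memory IPC, a sketch using the POSIX shared-memory API; the object name /demo_shm and the message text are arbitrary, error checking is omitted, and linking with -lrt may be needed on some Linux systems:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";     /* arbitrary shared-memory object name */
    const size_t size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);   /* create/open the object */
    ftruncate(fd, size);                                /* set its size */
    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    snprintf(ptr, size, "hello from the producer");     /* write into shared memory */
    /* a separate consumer process would shm_open() the same name, mmap(), and read */
    printf("%s\n", ptr);

    munmap(ptr, size);
    shm_unlink(name);                                   /* remove the object when done */
    return 0;
}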
Interprocess Communication – Message Passing

 Mechanism for processes to communicate


and to synchronize their actions
 Message system – processes communicate
with each other without resorting to shared
variables
 IPC facility provides two operations:
 send( message)
 receive( message)

 The message size is either fixed or variable


Message Passing (Cont.)

 If processes P and Q wish to communicate, they need to:


 Establish a communication link between them
 Exchange messages via send/receive

 Implementation issues:
 How are links established?

 Can a link be associated with more than two processes?

 How many links can there be between every pair of


communicating processes?
 What is the capacity of a link?

 Is the size of a message that the link can accommodate fixed or


variable?
 Is a link unidirectional or bi-directional?
Message Passing (Cont.)

 Implementation of communication link


 Physical:
Shared memory
Hardware bus
Network
 Logical:
Direct or indirect
Synchronous or asynchronous
Automatic or explicit buffering
Direct Communication

 Processes must name each other explicitly:


 send ( P, message) – send a message to process P
 receive( Q, message) – receive a message from process
Q
 Properties of communication link
 Links are established automatically
 A link is associated with exactly one pair of
communicating processes
 Between each pair there exists exactly one link
 The link may be unidirectional, but is usually bi-
directional
Indirect Communication

 Messages are directed to and received from
mailboxes (also referred to as ports)
 Each mailbox has a unique id
 Processes can communicate only if they share a mailbox
 Properties of communication link
 Link established only if processes share a common
mailbox
 A link may be associated with many processes
 Each pair of processes may share several
communication links
 Link may be unidirectional or bi-directional
Indirect Communication

 Operations
 create a new mailbox (port)
 send and receive messages through mailbox
 destroy a mailbox

 Primitives are defined as:


send(A, message) – send a message to
mailbox A
receive(A, message) – receive a message
from mailbox A
Synchronization

Message passing may be either blocking or non-blocking


Blocking is considered synchronous
◦ Blocking send -- the sender is blocked until the message is received
◦ Blocking receive -- the receiver is blocked until a message is available
Non-blocking is considered asynchronous
◦ Non-blocking send -- the sender sends the message and continues
◦ Non-blocking receive -- the receiver receives:
● A valid message, or
● Null message
■ Different combinations possible
● If both send and receive are blocking, we have a rendezvous
Synchronization (Cont.)

■ Producer-consumer becomes trivial

message next_produced;
while (true) {
/* produce an item in next produced */
send(next_produced);
}
message next_consumed;
while (true) {
receive(next_consumed);

/* consume the item in next consumed */


}
Buffering

 Queue of messages attached to the link.


 implemented in one of three ways
1. Zero capacity – no messages are queued on a link.
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
Examples of IPC Systems - Mach

 Mach communication is message based


 Even system calls are messages

 Each task gets two mailboxes at creation- Kernel and Notify

 Only three system calls needed for message transfer

msg_send(), msg_receive(), msg_rpc()


 Mailboxes needed for communication, created via

port_allocate()
 Send and receive are flexible, for example four options if mailbox
full:
Wait indefinitely
Wait at most n milliseconds
Return immediately
Temporarily cache a message
Examples of IPC Systems – Windows

 Message-passing centric via advanced local procedure


call (LPC) facility
 Only works between processes on the same system
 Uses ports (like mailboxes) to establish and maintain
communication channels
 Communication works as follows:
The client opens a handle to the subsystem’s connection port
object.
The client sends a connection request.
The server creates two private communication ports and
returns the handle to one of them to the client.
The client and server use the corresponding port handle to send
messages or callbacks and to listen for replies.
Local Procedure Calls in Windows
Communications in Client-Server Systems

 Sockets
 Remote Procedure Calls
 Pipes
 Remote Method Invocation (Java)
Ordinary Pipes

■ Ordinary Pipes allow communication in standard producer-


consumer style
■ Producer writes to one end (the write-end of the pipe)
■ Consumer reads from the other end (the read-end of the pipe)
■ Ordinary pipes are therefore unidirectional
■ Require parent-child relationship between communicating
processes
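
A minimal sketch of an ordinary pipe between a parent (producer) and child (consumer) on UNIX; the message text is arbitrary and error handling is mostly omitted:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {                /* child = consumer: reads from the pipe */
        close(fd[1]);                 /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n] = '\0';
        printf("child read: %s\n", buf);
        close(fd[0]);
    } else {                          /* parent = producer: writes to the pipe */
        close(fd[0]);                 /* close the unused read end */
        const char *msg = "greetings through the pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}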
Named Pipes

 Named Pipes are more powerful than


ordinary pipes
 Communication is bidirectional
 No parent-child relationship is necessary
between the communicating processes
 Several processes can use the named pipe
for communication
 Provided on both UNIX and Windows
systems
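
A hedged sketch using a UNIX FIFO (named pipe); the path /tmp/demo_fifo is an arbitrary choice, and the reader is forked here only to keep the example in one file -- with a named pipe the communicating processes need not be related at all. Windows named pipes use a different API.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";    /* arbitrary FIFO path */
    mkfifo(path, 0666);                     /* create the named pipe */

    if (fork() == 0) {                      /* reader: could just as well be an unrelated process */
        char buf[64];
        int fd = open(path, O_RDONLY);
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        buf[n] = '\0';
        printf("reader got: %s\n", buf);
        close(fd);
        return 0;
    }

    int fd = open(path, O_WRONLY);          /* writer: blocks until a reader opens the FIFO */
    const char *msg = "hello via named pipe";
    write(fd, msg, strlen(msg));
    close(fd);
    wait(NULL);
    unlink(path);                           /* remove the FIFO when finished */
    return 0;
}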
Question on process states

 Consider a system with n CPUs and m processes, where
n >= 1 and m > n. Calculate the lower and upper bounds
on the number of processes in each of the following states:
 READY
 RUNNING
 BLOCK STATE
Question on process states

SOLUTION:
STATE      MINIMUM    MAXIMUM

READY      0/1 ?      m

RUNNING    0          n

BLOCK      0          m
Threads
Motivation

 Most modern applications are multithreaded


 Threads run within application
 Multiple tasks within the application can be implemented
by separate threads
 Update display
 Fetch data
 Spell checking
 Answer a network request
 Process creation is heavy-weight while thread creation
is light-weight
 Can simplify code, increase efficiency
 Kernels are generally multithreaded
Benefits

 Responsiveness – may allow continued execution if


part of process is blocked, especially important for
user interfaces
 Resource Sharing – threads share resources of
process, easier than shared memory or message
passing
 Economy – cheaper than process creation, thread
switching lower overhead than context switching
 Scalability – process can take advantage of
multiprocessor architectures
Multicore Programming

 Multi-core or multiprocessor systems putting pressure


on programmers, challenges include:
 Dividing activities
 Balance
 Data splitting
 Data dependency
 Testing and debugging
 Parallelism implies a system can perform more than one
task simultaneously
 Concurrency supports more than one task making
progress
 Single processor / core, scheduler providing concurrency
Multicore Programming (Cont.)

 Types of parallelism
 Data parallelism – distributes subsets of the same data across
multiple cores, same operation on each
 Task parallelism – distributing threads across cores, each thread
performing unique operation
 As # of threads grows, so does architectural support
for threading
 CPUs have cores as well as hardware threads
 Consider Oracle SPARC T4 with 8 cores, and 8 hardware threads
per core
Concurrency vs. Parallelism

■ Concurrent execution on single-core system:

■ Parallelism on a multi-core system:


Single and Multithreaded Processes
User Threads and Kernel Threads

 User threads - management done by user-level threads library


 Three primary thread libraries:
 POSIX Pthreads
 Windows threads
 Java threads

 Kernel threads - Supported by the Kernel


 Examples – virtually all general purpose operating systems,
including:
 Windows
 Solaris
 Linux
 Tru64 UNIX
 Mac OS X
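
A small illustrative Pthreads example (the summation task is arbitrary), showing a thread created through the POSIX threads library:

#include <pthread.h>
#include <stdio.h>

static long sum = 0;                  /* shared by the threads of this process */

static void *runner(void *param) {    /* thread start routine */
    long upper = (long)param;
    for (long i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(NULL);
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, runner, (void *)10L);  /* create a new thread */
    pthread_join(tid, NULL);                          /* wait for it to finish */
    printf("sum = %ld\n", sum);
    return 0;
}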
Multithreading Models

 Many-to-One

 One-to-One

 Many-to-Many
Many-to-One

 Many user-level threads


mapped to single kernel thread
 One thread blocking causes all
to block
 Multiple threads may not run in
parallel on a multicore system
because only one may be in
the kernel at a time
 Few systems currently use this
model
 Examples:
 Solaris Green Threads
 GNU Portable Threads
One-to-One

 Each user-level thread maps to kernel


thread
 Creating a user-level thread creates a
kernel thread
 More concurrency than many-to-one
 Number of threads per process
sometimes restricted due to overhead
 Examples
 Windows
 Linux
 Solaris 9 and later
Many-to-Many Model

 Allows many user level


threads to be mapped to
many kernel threads
 Allows the operating system
to create a sufficient number
of kernel threads
 Solaris prior to version 9
 Windows with the
ThreadFiber package
How Does Process Synchronization Work?
For example, suppose process A is changing the data in a memory
location while another process B is trying to read the data
from the same location. There is a high probability that the
data read by the second process will be erroneous.
Race condition

 Race condition
 Two or more processes are reading or writing some shared
data and the final result depends on who runs precisely when
Critical Section Problem
 Critical region
 Part of the program where the shared memory is accessed

 Mutual exclusion
 Prohibit more than one process from reading and writing the
shared data at the same time
Critical Section Problem

 Consider system of n processes {p0, p1, … pn-1}


 Each process has critical section segment of code
 Process may be changing common variables, updating table,
writing file, etc
 When one process in critical section, no other may be in its
critical section
 Critical section problem is to design protocol to
solve this
 Each process must ask permission to enter critical
section in entry section, may follow critical section
with exit section, then remainder section
Sections of a Program

 Here, are four essential elements of the critical section:


 Entry Section: the part of the process that requests
permission to enter its critical section.
 Critical Section: the part in which one process at a time
may access and modify the shared variables.
 Exit Section: the part that releases the critical section
when a process finishes, allowing another process waiting
in the entry section to enter.
 Remainder Section: all other parts of the code, which are
not in the critical, entry, or exit sections.
Critical Section Problem
do {

entry section

critical section

exit section

remainder
section
} while (TRUE);

General structure of a typical process Pi


Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections

2. Progress –
 If no process is executing in its critical section
 and there exist some processes that wish to enter their
critical section
 then only the processes outside remainder section (i.e. the
processes competing for critical section, or exit section)
can participate in deciding which process will enter CS next
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the n processes
Critical Section Problem

Mutual exclusion using critical regions


Background

 Processes can execute concurrently


 May be interrupted at any time, partially completing execution
 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes
 Illustration of the problem:
Suppose that we wanted to provide a solution to the consumer-
producer problem that fills all the buffers. We can do so by having an
integer counter that keeps track of the number of full buffers. Initially,
counter is set to 0. It is incremented by the producer after it produces
a new buffer and is decremented by the consumer after it consumes
a buffer.
Producer Consumer Problem
Producer-Consumer Problem
 Paradigm for cooperating processes,producer process
produces information that is consumed by a consumer
process

[Figure: circular buffer with slots 0-5; out marks the next slot the consumer reads,
in marks the next free slot the producer fills.
Buffer empty => in == out; Buffer full => (in + 1) % size == out.]

• unbounded-buffer places no practical limit on the size of the


buffer
• bounded-buffer assumes that there is a fixed buffer size
Bounded-Buffer –
Shared-Memory Solution

 Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Producer
int counter = 0;

void producer(void)
{
item next_produced;
while (true) {
/* produce an item in next_produced */
while (counter == BUFFER_SIZE)
; /* do nothing -- buffer full */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
}
Consumer

void consumer(void)
{
item next_consumed;
while (true) {
while (counter == 0)
; /* do nothing -- buffer empty */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
/* consume the item in next_consumed */
}
}
Mutual Exclusion

 Disable interrupt
 After entering critical region, disable all interrupts
 Since clock is just an interrupt, no CPU preemption can occur
 Disabling interrupt is useful for OS itself, but not for users…
Mutual Exclusion with busy waiting
 Lock variable
 A software solution
 A single, shared variable (lock)
 before entering critical region, programs test the variable,
 if 0, enter CS;
 if 1, the critical region is occupied

 What is the problem?


while (true)
{
while (lock != 0)
; /* busy wait until the lock is free */
lock = 1;   /* claim the lock -- note the test and the set are not atomic */
CS();       /* critical section */
lock = 0;   /* release the lock */
non_CS();   /* remainder section */
}
Concepts

 Busy waiting
 Continuously testing a variable until some value appears

 Spin lock
 A lock using busy waiting is called a spin lock

 CPU time wastage!


Race Condition

 counter++ could be implemented as


register1 = counter
register1 = register1 + 1
counter = register1
 counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
 Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}
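
A short sketch that makes this race observable in practice: two Pthreads increment an unprotected shared counter (the iteration count is arbitrary), and increments are usually lost precisely because counter++ compiles to the load/add/store sequence shown above.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared, unprotected */

static void *inc(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, inc, NULL);
    pthread_create(&t2, NULL, inc, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* expected 2000000, but interleaved updates usually lose increments */
    printf("counter = %ld\n", counter);
    return 0;
}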
Critical-Section Handling in OS

Two approaches depending on if kernel is


preemptive or non- preemptive
 Preemptive – allows preemption of process
when running in kernel mode
 Non-preemptive – runs until exits kernel mode,
blocks, or voluntarily yields CPU
Essentially free of race conditions in kernel mode
Peterson’s Solution

 Good algorithmic description of solving the problem


 Two process solution
 Assume that the load and store machine-language
instructions are atomic; that is, cannot be interrupted
 The two processes share two variables:
 int turn;
 Boolean flag[2]

 The variable turn indicates whose turn it is to enter the


critical section
 The flag array is used to indicate if a process is ready
to enter the critical section. flag[i] = true implies that
process Pi is ready!
Algorithm for Process Pi

do {
flag[i] = true;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = false;
remainder section
} while (true);
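
A hedged, self-contained C rendering of the same algorithm for two processes/threads with indices 0 and 1; the function names are illustrative, and on modern hardware additional memory barriers would be needed because compilers and CPUs may reorder these loads and stores.

#include <stdbool.h>

/* shared between process 0 and process 1 */
volatile bool flag[2] = { false, false };
volatile int turn = 0;

void enter_critical(int i) {          /* i is 0 or 1; j is the other process */
    int j = 1 - i;
    flag[i] = true;                   /* I am ready to enter */
    turn = j;                         /* politely let the other process go first */
    while (flag[j] && turn == j)
        ;                             /* busy wait */
}

void exit_critical(int i) {
    flag[i] = false;                  /* I am no longer in (or waiting for) the CS */
}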
Peterson’s Solution (Cont.)

 Provable that the three CS requirement are


met:
1. Mutual exclusion is preserved
Pi enters CS only if:
either flag[j] == false or turn == i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
Synchronization Hardware

 Many systems provide hardware support for


implementing the critical section code.
 All solutions below based on idea of locking
 Protecting critical regions via locks
 Uniprocessors – could disable interrupts
 Currently running code would execute without preemption
 Generally too inefficient on multiprocessor systems
Operating systems using this not broadly scalable
 Modern machines provide special atomic
hardware instructions
Atomic = non-interruptible
 Either test memory word and set value
 Or swap contents of two memory words
Solution to Critical-section Problem Using Locks

do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
test_and_set Instruction

Definition:
boolean test_and_set (boolean *target)
{
boolean rv = *target;
*target = TRUE;
return rv;
}
1. Executed atomically
2. Returns the original value of the passed parameter
3. Sets the new value of the passed parameter to
“TRUE”.
Solution using test_and_set()

 Shared Boolean variable lock, initialized


to FALSE
 Solution:
do {
while (test_and_set(&lock))
; /* do nothing */
/* critical section */
lock = false;
/* remainder section */

} while (true);
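
Not part of the original slides, but for reference: modern C exposes an atomic test-and-set directly, so the same spinlock can be sketched with C11's <stdatomic.h> (the names spin_acquire/spin_release are illustrative).

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == lock available */

void spin_acquire(void) {
    while (atomic_flag_test_and_set(&lock))   /* atomically sets the flag, returns old value */
        ;                                     /* busy wait while someone else holds it */
}

void spin_release(void) {
    atomic_flag_clear(&lock);                 /* make the lock available again */
}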
compare_and_swap Instruction

Definition:
int compare_and_swap(int *value, int expected, int new_value) {
int temp = *value;

if (*value == expected)
*value = new_value;
return temp;
}

1. Executed atomically
2. Returns the original value of the passed parameter
“value”
3. Sets the variable “value” to the value of the passed
parameter “new_value”, but only if “value”
== “expected”. That is, the swap takes place only
under this condition.
Solution using compare_and_swap

 Shared integer “lock” initialized to 0;


 Solution:
do {
while (compare_and_swap(&lock, 0, 1) != 0)
; /* do nothing */
/* critical section */
lock = 0;
/* remainder section */
} while (true);
Bounded-waiting Mutual Exclusion with test_and_set

do {
waiting[i] = true;
key = true;
while (waiting[i] && key)
key = test_and_set(&lock);
waiting[i] = false;
/* critical section */
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = false;
else
waiting[j] = false;
/* remainder section */
} while (true);
Mutex Locks

 Previous solutions are complicated and


generally inaccessible to application
programmers
 OS designers build software tools to solve
critical section problem
 Simplest is mutex lock
 Protect a critical section by first acquire() a lock
then release() the lock
 Boolean variable indicating if lock is available or not
 Calls to acquire() and release() must be atomic
 Usually implemented via hardware atomic instructions

 But this solution requires busy waiting


■ This lock therefore called a spinlock
acquire() and release()
 acquire() {
while (!available)
; /* busy wait */
available = false;
}
 release() {
available = true;
}
 do {
acquire lock
critical section
release lock
remainder section
} while (true);
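
In practice, application programmers use a library mutex rather than writing acquire()/release() themselves; a hedged Pthreads sketch (the shared counter is just a stand-in for critical-section data):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&mutex);      /* acquire the lock: entry section */
        shared_counter++;                /* critical section */
        pthread_mutex_unlock(&mutex);    /* release the lock: exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %ld\n", shared_counter);   /* now reliably 2000000 */
    return 0;
}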
Semaphore
 Synchronization tool that provides more sophisticated ways (than Mutex locks) for
process to synchronize their activities.
 Semaphore S – integer variable
 Can only be accessed via two indivisible (atomic) operations
 wait() and signal()
Originally called P() and V()

 Definition of the wait() operation


wait(S) {
while (S <= 0)
; // busy wait
S--;
}
 Definition of the signal() operation
signal(S) {
S++;
}
Semaphore Usage
 Counting semaphore – integer value can range over an unrestricted domain
 Binary semaphore – integer value can range only between 0 and 1
 Same as a mutex lock
 Can solve various synchronization problems
 Consider P1 and P2 that require S1 to happen before S2
Create a semaphore “synch” initialized to 0
P1:
    S1;
    signal(synch);
P2:
    wait(synch);
    S2;
 Can implement a counting semaphore S as a binary semaphore
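
A hedged sketch of the same S1-before-S2 ordering using POSIX semaphores with two threads; the printed statements stand in for S1 and S2.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t synch;                       /* initialized to 0: S2 must wait for S1 */

static void *p1(void *arg) {
    printf("S1: runs first\n");           /* statement S1 */
    sem_post(&synch);                     /* signal(synch) */
    return NULL;
}

static void *p2(void *arg) {
    sem_wait(&synch);                     /* wait(synch) blocks until P1 signals */
    printf("S2: runs after S1\n");        /* statement S2 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);               /* initial value 0, shared between threads */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}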
Semaphore Implementation

 Must guarantee that no two processes can


execute the wait() and signal() on the same
semaphore at the same time
 Thus, the implementation becomes the
critical section problem where the wait and
signal code are placed in the critical section
 Could now have busy waiting in critical section
implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
 Note that applications may spend lots of
time in critical sections and therefore this
is not a good solution
Semaphore Implementation with no Busy waiting

 With each semaphore there is an


associated waiting queue
 Each entry in a waiting queue has two
data items:
 value (of type integer)
 pointer to next record in the list
 Two operations:
 block – place the process invoking the operation
on the appropriate waiting queue
 wakeup – remove one of processes in the waiting
queue and place it in the ready queue
 typedef struct{
int value;
struct process *list;
} semaphore;
Implementation with no Busy waiting (Cont.)

wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}

signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
Deadlock and Starvation
 Deadlock – two or more processes are waiting
indefinitely for an event that can be caused by
only one of the waiting processes
 Let S and Q be two semaphores initialized to 1

        P0                  P1
     wait(S);            wait(Q);
     wait(Q);            wait(S);
       ...                 ...
     signal(S);          signal(Q);
     signal(Q);          signal(S);

 Starvation – indefinite blocking


 A process may never be removed from the semaphore queue in which it is
suspended
 Priority Inversion – Scheduling problem when
lower-priority process holds a lock needed by
higher-priority process
Classical Problems of Synchronization

 Classical problems used to test newly-


proposed synchronization schemes
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem
Bounded-Buffer Problem

 n buffers, each can hold one item
 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value n
Bounded Buffer Problem (Cont.)

 The structure of the producer process

do {
...
/* produce an item in next_produced */
...
wait(empty);
wait(mutex);
...
/* add next produced to the buffer */
...
signal(mutex);
signal(full);
} while (true);
Bounded Buffer Problem (Cont.)
 The structure of the consumer process

do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next_consumed */
...
signal(mutex);
signal(empty);
...
/* consume the item in next consumed */
...
} while (true);
Readers-Writers Problem

 A dataset is shared among a number of concurrent


processes
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write
 Problem – allow multiple readers to read at the same time
 Only one single writer can access the shared data at the same time

 Several variations of how readers and writers are


considered – all involve some form of priorities
 Shared Data
 Data set
 Semaphore rw_mutex initialized to 1
 Semaphore mutex initialized to 1
 Integer read_count initialized to 0
Readers-Writers Problem (Cont.)

 The structure of a writer process

do {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
} while (true);
Readers-Writers Problem (Cont.)

 The structure of a reader process


do {
wait(mutex);
read_count++;
if (read_count == 1)
wait(rw_mutex);
signal(mutex);
...
/* reading is performed */
...
wait(mutex);
read_count--;
if (read_count == 0)
signal(rw_mutex);
signal(mutex);
} while (true);
Dining-Philosophers Problem

 Philosophers spend their lives alternating thinking and eating


 Don’t interact with their neighbors, occasionally try to pick up 2 chopsticks (one at a time) to
eat from bowl
 Need both to eat, then release both when done

 In the case of 5 philosophers


 Shared data

Bowl of rice (data set)


Semaphore chopstick [5] initialized to 1
Dining-Philosophers Problem Algorithm

 The structure of Philosopher i:


do {
wait (chopstick[i] );
wait (chopStick[ (i + 1) % 5] );

// eat

signal (chopstick[i] );
signal (chopstick[ (i + 1) % 5] );

// think

} while (TRUE);
 What is the problem with this algorithm?
Dining-Philosophers Problem Algorithm (Cont.)

 Deadlock handling
 Allow at most 4 philosophers to be sitting simultaneously at the
table.
 Allow a philosopher to pick up the chopsticks only if both are available
(the picking must be done in a critical section).
 Use an asymmetric solution -- an odd-numbered philosopher
picks up first the left chopstick and then the right chopstick.
Even-numbered philosopher picks up first the right chopstick
and then the left chopstick.
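
A hedged sketch of the asymmetric solution described in the last bullet, reusing the chopstick semaphore array from the earlier slides (semaphore initialization and thread creation are omitted):

#include <semaphore.h>

#define N 5
sem_t chopstick[N];                 /* each initialized to 1 elsewhere */

void philosopher(int i) {
    int left  = i;
    int right = (i + 1) % N;

    while (1) {
        if (i % 2 == 1) {           /* odd philosopher: left chopstick first */
            sem_wait(&chopstick[left]);
            sem_wait(&chopstick[right]);
        } else {                    /* even philosopher: right chopstick first */
            sem_wait(&chopstick[right]);
            sem_wait(&chopstick[left]);
        }

        /* eat */

        sem_post(&chopstick[left]);
        sem_post(&chopstick[right]);

        /* think */
    }
}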
QUESTION ON DINING PHILOSOPHER
PROBLEM
 A solution to the Dining Philosophers Problem which
avoids deadlock is
(a) Ensure that all philosophers pick up the left fork
before the right fork
(b) Ensure that all philosophers pick up the right fork
before the left fork
(c) Ensure that some philosophers pick up the left fork
before the right fork, and all other philosophers
pick up the right fork before the left fork
(d) None of the above
SOLUTION

 Use an asymmetric solution -- an odd-numbered


philosopher picks up first the left chopstick and
then the right chopstick. Even-numbered
philosopher picks up first the right chopstick and
then the left chopstick
 The correct answer is (c)
Summary
Process synchronization is the task of coordinating the
execution of processes so that no two processes can
access the same shared data and resources at the same time.
The four elements of a critical section are 1) Entry section 2)
Critical section 3) Exit section 4) Remainder section.
A critical section is a segment of code that can be
accessed by only a single process at a specific point in time.
The three rules that a critical-section solution must enforce
are: 1) Mutual Exclusion 2) Progress 3) Bounded Waiting.
A mutex lock is a special type of binary semaphore
which is used for controlling access to the shared
resource.
The Progress rule applies when no process is in the critical
section: the processes wishing to enter must then decide
which one enters next, and that decision cannot be
postponed indefinitely.
Summary (Cont.)
Under bounded waiting, after a process makes a request
to enter its critical section, there is a limit on how many
other processes can enter their critical sections before
that request is granted.
Peterson's solution is a well-known software solution to the
critical-section problem.
Critical-section problems can also be resolved with
hardware synchronization support.
Synchronization hardware is not a simple method for
application programmers to use directly, so the software
abstraction known as mutex locks was also introduced.
The semaphore is another tool for solving the
critical-section problem.
