
Operating System

• A program that acts as an intermediary between a user of a computer and the computer hardware.
• Operating system goals:
– Execute user programs and make solving user problems easier.
– Make the computer system convenient to use.
• Use the computer hardware in an efficient manner.
Computer System Components
1. Hardware – provides basic computing resources
(CPU, memory, I/O devices).
2. Operating system – controls and coordinates the
use of the hardware among the various application
programs for the various users.
3. Applications programs – define the ways in which
the system resources are used to solve the
computing problems of the users (compilers,
database systems, video games, business
programs).
4. Users (people, machines, other computers).
System Components
Definitions
• Resource allocator – manages and
allocates resources.
• Control program – controls the
execution of user programs and
operations of I/O devices .
• Kernel – the one program running at all
times (all else being application
programs).
Mainframe Systems
• Reduce setup time by batching similar
jobs
• Automatic job sequencing – automatically
transfers control from one job to another.
First rudimentary operating system.
• Resident monitor
– initial control in monitor
– control transfers to job
– when job completes, control transfers back to monitor
Simple Batch System
Multiprogrammed Batch Systems
Several jobs are kept in main memory at the same time, and the
CPU is multiplexed among them.
• Memory management – the system
must allocate the memory to several
jobs.
• CPU scheduling – the system must
choose among several jobs ready to
run.
• Allocation of devices.
Time-Sharing Systems – Interactive Computing
• The CPU is multiplexed among several jobs that are kept in memory and on disk (the CPU is allocated to a job only if the job is in memory).
• A job is swapped in and out of memory to the disk.
• On-line communication between the user and the system is provided; when the operating system finishes the execution of one command, it seeks the next “control statement” from the user’s keyboard.
• On-line system must be available for users to access
data and code.
Desktop Systems
• Personal computers – computer system
dedicated to a single user.
• I/O devices – keyboards, mice, display
screens, small printers.
• User convenience and responsiveness.
• Can adopt technology developed for larger operating systems; often individuals have sole use of the computer and do not need advanced CPU utilization or protection features.
• May run several different types of operating
systems (Windows, MacOS, UNIX, Linux)
Parallel Systems
• Multiprocessor systems with more than one CPU
in close communication.
• Tightly coupled system – processors share
memory and a clock; communication usually
takes place through the shared memory.
• Advantages of parallel system:
– Increased throughput
– Economical
– Increased reliability
• graceful degradation
• fail-soft systems
Parallel Systems (Cont.)
• Symmetric multiprocessing (SMP)
– Each processor runs an identical copy of the operating system.
– Many processes can run at once without
performance deterioration.
– Most modern operating systems support SMP
• Asymmetric multiprocessing
– Each processor is assigned a specific task; a master processor schedules and allocates work to slave processors.
– More common in extremely large systems
Symmetric Multiprocessing Architecture
Distributed Systems
• Distribute the computation among several physical
processors.
• Loosely coupled system – each processor has its
own local memory; processors communicate with
one another through various communications
lines, such as high-speed buses or telephone lines.
• Advantages of distributed systems:
– Resource sharing
– Computation speed up – load sharing
– Reliability
– Communications
Distributed Systems (cont)
• Requires networking infrastructure.
• Local area networks (LAN) or Wide
area networks (WAN)
• May be either client-server or peer-to-
peer systems.
General Structure of Client-Server
Clustered Systems
• Clustering allows two or more systems to
share storage.
• Provides high reliability.
• Asymmetric clustering: one server runs the application while the other servers stand by.
• Symmetric clustering: all N hosts are running
the application.
Real-Time Systems
• Often used as a control device in a
dedicated application such as controlling
scientific experiments, medical imaging
systems, industrial control systems, and
some display systems.
• Well-defined fixed-time constraints.
• Real-Time systems may be either hard or
soft real-time.
Real-Time Systems
• Hard real-time:
– Secondary storage limited or absent; data stored in short-term memory or read-only memory (ROM)
– Conflicts with time-sharing systems, not supported by
general-purpose operating systems.

• Soft real-time
– Limited utility in industrial control or robotics
– Useful in applications (multimedia, virtual reality)
requiring advanced operating-system features.
Handheld Systems
• Personal Digital Assistants (PDAs)
• Cellular telephones
• Issues:
– Limited memory
– Slow processors
– Small display screens.
Migration of Operating-System Concepts and Features
Computing Environments
• Traditional computing
• Web-Based Computing
• Embedded Computing
Operating-System Structures

• System Components
• Operating System Services
• System Calls
• System Programs
• System Structure
• Virtual Machines
• System Design and Implementation
• System Generation
Common System Components
• Process Management
• Main Memory Management
• File Management
• I/O System Management
• Secondary-Storage Management
• Networking
• Protection System
• Command-Interpreter System
Process Management
• A process is a program in execution. A
process needs certain resources, including
CPU time, memory, files, and I/O devices, to
accomplish its task.
• The operating system is responsible for the
following activities in connection with process
management.
– Process creation and deletion.
– Process suspension and resumption.
– Provision of mechanisms for:
• process synchronization
• process communication
Main-Memory Management
• Memory is a large array of words or bytes, each with its
own address. It is a repository of quickly accessible data
shared by the CPU and I/O devices.
• Main memory is a volatile storage device. It loses its
contents in the case of system failure.
• The operating system is responsible for the following
activities in connection with memory management:
– Keep track of which parts of memory are currently being
used and by whom.
– Decide which processes to load when memory space
becomes available.
– Allocate and deallocate memory space as needed.
File Management

• A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data.
• The operating system is responsible for the following
activities in connection with file management:
– File creation and deletion.
– Directory creation and deletion.
– Support of primitives for manipulating files and directories.
– Mapping files onto secondary storage.
– File backup on stable (nonvolatile) storage media.
I/O System Management

• The I/O system consists of:
– A buffer-caching system
– A general device-driver interface
– Drivers for specific hardware devices
Secondary-Storage Management
• Since main memory (primary storage) is volatile and too
small to accommodate all data and programs
permanently, the computer system must provide
secondary storage to back up main memory.
• Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.
data.
• The operating system is responsible for the following
activities in connection with disk management:
– Free space management
– Storage allocation
– Disk scheduling
Networking (Distributed Systems)
• A distributed system is a collection of processors that do not share memory or a clock. Each processor has its own local memory.
own local memory.
• The processors in the system are connected through a
communication network.
• Communication takes place using a protocol.
• A distributed system provides user access to various
system resources.
• Access to a shared resource allows:
– Computation speed-up
– Increased data availability
– Enhanced reliability
Protection System
• Protection refers to a mechanism for
controlling access by programs, processes, or
users to both system and user resources.
• The protection mechanism must:
– distinguish between authorized and unauthorized
usage.
– specify the controls to be imposed.
– provide a means of enforcement.
Command-Interpreter System
• Many commands are given to the operating
system by control statements which deal with:
– process creation and management
– I/O handling
– secondary-storage management
– main-memory management
– file-system access
– protection
– networking
Command-Interpreter System
• The program that reads and interprets control statements is called variously:
– command-line interpreter
– shell (in UNIX)
• Its function is to get and execute the next command statement.
Operating System Services
• Program execution – system capability to load a program
into memory and to run it.
• I/O operations – since user programs cannot execute I/O
operations directly, the operating system must provide some
means to perform I/O.
• File-system manipulation – program capability to read,
write, create, and delete files.
• Communications – exchange of information between
processes executing either on the same computer or on
different systems tied together by a network. Implemented
via shared memory or message passing.
• Error detection – ensure correct computing by detecting
errors in the CPU and memory hardware, in I/O devices, or
in user programs.
Additional Operating System Functions
Additional functions exist not for helping the
user, but rather for ensuring efficient system
operations.
• Resource allocation – allocating resources to
multiple users or multiple jobs running at the same
time.
• Accounting – keep track of and record which users
use how much and what kinds of computer
resources for account billing or for accumulating
usage statistics.
• Protection – ensuring that all access to system
resources is controlled.
System Calls
• System calls provide the interface between a running program
and the operating system.
– Generally available as assembly-language instructions.
– Languages defined to replace assembly language for
systems programming allow system calls to be made
directly (e.g., C, C++)
• Three general methods are used to pass parameters between a
running program and the operating system.
– Pass parameters in registers.
– Store the parameters in a table in memory, and the table
address is passed as a parameter in a register.
– The program pushes (stores) the parameters onto the stack, and the operating system pops them off the stack.
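
For illustration (a hedged sketch, not from the slides): in C, a system call such as write is reached through its library wrapper, and on most ABIs its three parameters travel in registers, i.e., the first method above.

/* Sketch: the write system call via its C library wrapper. */
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello from a system call\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);  /* fd, buffer address, byte count */
    return 0;
}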
Passing of Parameters As A Table
Types of System Calls
• Process control
• File management
• Device management
• Information maintenance
• Communications
MS-DOS Execution

(figure panels: At System Start-up; Running a Program)

UNIX Running Multiple Programs
Communication Models
• Communication may take place using either message passing or shared memory.

(figure panels: Message Passing; Shared Memory)
System Programs
• System programs provide a convenient environment for program development and execution. They can be divided into:
– File manipulation
– Status information
– File modification
– Programming language support
– Program loading and execution
– Communications
– Application programs
• Most users’ view of the operating system is defined by system programs, not the actual system calls.
Layered Approach
• The operating system is divided into a number
of layers (levels), each built on top of lower
layers. The bottom layer (layer 0), is the
hardware; the highest (layer N) is the user
interface.
• With modularity, layers are selected such that
each uses functions (operations) and services
of only lower-level layers.
An Operating System Layer
OS/2 Layer Structure
Microkernel System Structure
• Moves as much as possible from the kernel into “user” space.
• Communication takes place between user modules
using message passing.
• Benefits:
- easier to extend a microkernel
- easier to port the operating system to new
architectures
- more reliable (less code is running in kernel mode)
- more secure
Windows NT Client-Server
Structure
Virtual Machines
• A virtual machine takes the layered approach
to its logical conclusion. It treats hardware
and the operating system kernel as though they
were all hardware.
• A virtual machine provides an interface
identical to the underlying bare hardware.
• The operating system creates the illusion of
multiple processes, each executing on its own
processor with its own (virtual) memory.
Virtual Machines
• The resources of the physical computer are
shared to create the virtual machines.
– CPU scheduling can create the appearance that
users have their own processor.
– Spooling and a file system can provide virtual card
readers and virtual line printers.
– A normal user time-sharing terminal serves as the
virtual machine operator’s console.
System Models

(figure panels: Non-virtual Machine; Virtual Machine)


Advantages/Disadvantages of Virtual Machines

• The virtual-machine concept provides complete protection of system resources since each virtual machine is isolated from all other virtual machines. This isolation, however, permits no direct sharing of resources.
• A virtual-machine system is a perfect vehicle for operating-
systems research and development. System development is done
on the virtual machine, instead of on a physical machine and so
does not disrupt normal system operation.
• The virtual machine concept is difficult to implement due to the effort required to provide an exact duplicate of the underlying machine.
Java Virtual Machine
• Compiled Java programs are platform-neutral
bytecodes executed by a Java Virtual Machine
(JVM).
• JVM consists of
- class loader
- class verifier
- runtime interpreter
• Just-In-Time (JIT) compilers increase
performance
Java Virtual Machine
System Design Goals
• User goals – operating system should be
convenient to use, easy to learn, reliable, safe,
and fast.
• System goals – operating system should be
easy to design, implement, and maintain, as
well as flexible, reliable, error-free, and
efficient.
Mechanisms and Policies
• Mechanisms determine how to do something,
policies decide what will be done.
• The separation of policy from mechanism is a very important principle; it allows maximum flexibility if policy decisions are to be changed later.
System Implementation
• Traditionally written in assembly language,
operating systems can now be written in
higher-level languages.
• Code written in a high-level language:
– can be written faster.
– is more compact.
– is easier to understand and debug.
• An operating system is far easier to port (move
to some other hardware) if it is written in a
high-level language.
System Generation (SYSGEN)
• Operating systems are designed to run on any of a
class of machines; the system must be configured for
each specific computer site.
• SYSGEN program obtains information concerning
the specific configuration of the hardware system.
• Booting – starting a computer by loading the kernel.
• Bootstrap program – code stored in ROM that is able
to locate the kernel, load it into memory, and start its
execution.
Process Concept
• An operating system executes a variety of programs:
– Batch system – jobs
– Time-shared systems – user programs or tasks
• Textbook uses the terms job and process almost
interchangeably.
• Process – a program in execution; process execution must
progress in sequential fashion.
• A process includes:
– program counter
– stack
– data section
Process State
• As a process executes, it changes state
– new: The process is being created.
– running: Instructions are being executed.
– waiting: The process is waiting for some
event to occur.
– ready: The process is waiting to be assigned to a processor.
– terminated: The process has finished
execution.
Diagram of Process State
Process Control Block (PCB)
Information associated with each process.
• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information
Process Control Block (PCB)
CPU Switch From Process to Process
Process Scheduling Queues
• Job queue – set of all processes in the system.
• Ready queue – set of all processes residing in
main memory, ready and waiting to execute.
• Device queues – set of processes waiting for
an I/O device.
• Process migration between the various
queues.
Ready Queue And Various I/O Device Queues
Representation of Process Scheduling
Schedulers

• Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
• Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU.
Addition of Medium Term Scheduling
Schedulers
• Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast.
• Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow.
• The long-term scheduler controls the degree of
multiprogramming.
• Processes can be described as either:
– I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts.
– CPU-bound process – spends more time doing
computations; few very long CPU bursts.
Context Switch
• When CPU switches to another process,
the system must save the state of the old
process and load the saved state for the
new process.
• Context-switch time is overhead; the
system does no useful work while
switching.
• Time dependent on hardware support.
Process Creation
• Parent process creates children processes, which, in turn, create other processes, forming a tree of processes.
• Resource sharing
– Parent and children share all resources.
– Children share subset of parent’s resources.
– Parent and child share no resources.
• Execution
– Parent and children execute concurrently.
– Parent waits until children terminate.
Process Creation
• Address space
– Child duplicate of parent.
– Child has a program loaded into it.
• UNIX examples
– fork system call creates new process
– exec system call used after a fork to replace the
process’ memory space with a new program.
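
A minimal runnable sketch of this fork/exec pattern (illustrative; error handling trimmed; ls is just an example of “a new program”):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                    /* child is a duplicate of the parent */
    if (pid == 0) {
        execlp("ls", "ls", (char *)NULL);  /* replace child's memory space with a new program */
        exit(1);                           /* reached only if exec fails */
    } else if (pid > 0) {
        wait(NULL);                        /* parent waits until child terminates */
        printf("child complete\n");
    }
    return 0;
}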
Processes Tree on a UNIX System
Process Termination
• Process executes last statement and asks the operating system to delete it (exit).
– Output data from child to parent (via wait).
– Process’ resources are deallocated by operating system.
• Parent may terminate execution of children processes
(abort).
– Child has exceeded allocated resources.
– Task assigned to child is no longer required.
– Parent is exiting.
• Some operating systems do not allow a child to continue if its parent terminates.
• Cascading termination.
Cooperating Processes
• Independent process cannot affect or be
affected by the execution of another process.
• Cooperating process can affect or be affected
by the execution of another process
• Advantages of process cooperation
– Information sharing
– Computation speed-up
– Modularity
– Convenience
Producer-Consumer Problem
• Paradigm for cooperating processes, producer
process produces information that is consumed
by a consumer process.
– unbounded-buffer places no practical limit on the
size of the buffer.
– bounded-buffer assumes that there is a fixed buffer
size.
Bounded-Buffer – Producer Process

void producer(void)
{
    int item;
    while (TRUE) {
        produce_item(&item);
        if (count == N) sleep();
        enter_item(item);
        count = count + 1;
        if (count == 1) wakeup(consumer);
    }
}
Bounded-Buffer – Consumer Process

void consumer(void)
{
    int item;
    while (TRUE) {
        if (count == 0) sleep();
        remove_item(&item);
        count = count - 1;
        if (count == N-1) wakeup(producer);
        consume_item(item);
    }
}
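
Note (a standard observation about this sleep/wakeup scheme, not stated on the slide): count is read and updated without mutual exclusion, so a wakeup can be delivered to a process that has just tested count but not yet gone to sleep; the wakeup is lost and both processes may sleep forever. This race is one motivation for the semaphore mechanism introduced later in this unit.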
Interprocess Communication (IPC)
• Mechanism for processes to communicate and to
synchronize their actions.
• Message system – processes communicate with each
other without resorting to shared variables.
• IPC facility provides two operations:
– send(message) – message size fixed or variable
– receive(message)
• If P and Q wish to communicate, they need to:
– establish a communication link between them
– exchange messages via send/receive
• Implementation of communication link
– physical (e.g., shared memory, hardware bus)
– logical (e.g., logical properties)
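
A hedged sketch of one such link between two related processes, using a UNIX pipe as the physical communication line (process roles and message text are illustrative):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[16];
    pipe(fd);                           /* establish the communication link */
    if (fork() == 0) {                  /* child plays Q */
        close(fd[1]);
        read(fd[0], buf, sizeof(buf));  /* receive(message) */
        printf("Q received: %s\n", buf);
    } else {                            /* parent plays P */
        close(fd[0]);
        write(fd[1], "hello", 6);       /* send(message) */
        wait(NULL);
    }
    return 0;
}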
Direct Communication
• Processes must name each other explicitly:
– send (P, message) – send a message to process P
– receive(Q, message) – receive a message from process
Q
• Properties of communication link
– Links are established automatically.
– A link is associated with exactly one pair of
communicating processes.
– Between each pair there exists exactly one link.
– The link may be unidirectional, but is usually bi-
directional.
Indirect Communication
• Messages are directed and received from
mailboxes (also referred to as ports).
– Each mailbox has a unique id.
– Processes can communicate only if they share a
mailbox.
• Properties of communication link
– Link established only if processes share a common
mailbox
– A link may be associated with many processes.
– Each pair of processes may share several
communication links.
– Link may be unidirectional or bi-directional.
Indirect Communication
• Operations
– create a new mailbox
– send and receive messages through mailbox
– destroy a mailbox
• Primitives are defined as:
send(A, message) – send a message to
mailbox A
receive(A, message) – receive a message
from mailbox A
Indirect Communication
• Mailbox sharing
– P1, P2, and P3 share mailbox A.
– P1, sends; P2 and P3 receive.
– Who gets the message?
• Solutions
– Allow a link to be associated with at most two
processes.
– Allow only one process at a time to execute a receive
operation.
– Allow the system to select arbitrarily the receiver.
Sender is notified who the receiver was.
Synchronization
• Message passing may be either blocking or
non-blocking.
• Blocking is considered synchronous
• Non-blocking is considered asynchronous
• send and receive primitives may be either
blocking or non-blocking.
Buffering
• Queue of messages attached to the link;
implemented in one of three ways.
1. Zero capacity – 0 messages. Sender must wait for receiver (rendezvous).
2. Bounded capacity – finite length of n messages. Sender must wait if link full.
3. Unbounded capacity – infinite length. Sender never waits.
Client-Server Communication
• Sockets
• Remote Procedure Calls
• Remote Method Invocation (Java)
Sockets
• A socket is defined as an endpoint for
communication.
• Concatenation of IP address and port
• The socket 161.25.19.8:1625 refers to port
1625 on host 161.25.19.8
• Communication takes place between a pair of sockets.
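
A minimal client-side sketch (assuming POSIX sockets; the host and port simply echo the 161.25.19.8:1625 example above and are not expected to answer):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);           /* create an endpoint for communication */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(1625);                        /* port */
    inet_pton(AF_INET, "161.25.19.8", &addr.sin_addr);  /* IP address */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        printf("connected to 161.25.19.8:1625\n");
    close(fd);
    return 0;
}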
Socket Communication
Remote Procedure Calls
• Remote procedure call (RPC) abstracts procedure
calls between processes on networked systems.
• Stubs – client-side proxy for the actual procedure
on the server.
• The client-side stub locates the server and
marshalls the parameters.
• The server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server.
Execution of RPC
Remote Method Invocation
• Remote Method Invocation (RMI) is a Java
mechanism similar to RPCs.
• RMI allows a Java program on one machine to
invoke a method on a remote object.
Marshalling Parameters
Race Condition
• Race condition: the situation where several processes access and manipulate shared data concurrently. The final value of the shared data depends upon which process finishes last.
• To prevent race conditions, concurrent processes must be synchronized.
The Critical-Section Problem
• n processes all competing to use some
shared data
• Each process has a code segment, called
critical section, in which the shared data is
accessed.
• Problem – ensure that when one process is
executing in its critical section, no other
process is allowed to execute in its critical
section.
Solution to Critical-Section Problem
1. Mutual Exclusion. If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections.
2. Progress. If no process is executing in its critical section
and there exist some processes that wish to enter their
critical section, then the selection of the processes that will
enter the critical section next cannot be postponed
indefinitely.
3. Bounded Waiting. A bound must exist on the number of
times that other processes are allowed to enter their critical
sections after a process has made a request to enter its
critical section and before that request is granted.
• Assume that each process executes at a nonzero speed.
• No assumption concerning relative speed of the n processes.
Initial Attempts to Solve Problem
• Only 2 processes, P0 and P1
• General structure of process Pi (other process Pj)
do {
    entry section
    critical section
    exit section
    remainder section
} while (1);
• Processes may share some common variables to
synchronize their actions.
Algorithm 1
• Shared variables:
– int turn;
initially turn = 0
– turn == i ⇒ Pi can enter its critical section
• Process Pi
do {
    while (turn != i) ;
    critical section
    turn = j;
    remainder section
} while (1);
• Satisfies mutual exclusion, but not progress
Algorithm 2
• Shared variables
– boolean flag[2];
initially flag[0] = flag[1] = false.
– flag[i] = true ⇒ Pi ready to enter its critical section
• Process Pi
do {
    flag[i] = true;
    while (flag[j]) ;
    critical section
    flag[i] = false;
    remainder section
} while (1);
• Satisfies mutual exclusion, but not progress requirement.
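
Combining the two ideas (the turn variable of Algorithm 1 and the flags of Algorithm 2) gives the standard Algorithm 3, Peterson’s algorithm, which satisfies all three requirements; a sketch in the same style:

do {
    flag[i] = true;
    turn = j;                       // defer to the other process
    while (flag[j] && turn == j) ;  // busy-wait
    critical section
    flag[i] = false;
    remainder section
} while (1);

Mutual exclusion holds because turn can favor only one process at a time; progress and bounded waiting hold because a waiting process enters as soon as the other clears its flag or sets turn.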
Semaphores
• Synchronization tool that does not require busy waiting.
• Semaphore S – integer variable
• can only be accessed via two indivisible (atomic)
operations
wait(S):
    while (S <= 0) ; // no-op
    S--;

signal(S):
    S++;
Critical Section of n Processes
• Shared data:
semaphore mutex; //initially mutex = 1

• Process Pi:

do {
wait(mutex);
critical section
signal(mutex);
remainder section
} while (1);
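
The same pattern with real primitives (a hedged sketch assuming POSIX semaphores and pthreads, compiled with -pthread; not part of the original slides):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex;                     /* initially 1, as above */
long counter = 0;                /* shared data */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);        /* wait(mutex) */
        counter++;               /* critical section */
        sem_post(&mutex);        /* signal(mutex) */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000 with mutual exclusion */
    sem_destroy(&mutex);
    return 0;
}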
Semaphore Implementation
• Define a semaphore as a record
typedef struct {
    int value;
    struct process *L;
} semaphore;

• Assume two simple operations:
– block suspends the process that invokes it.
– wakeup(P) resumes the execution of a blocked process P.
Implementation
• Semaphore operations now defined as
wait(S):
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block;
    }

signal(S):
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
Deadlock and Starvation
• Deadlock – two or more processes are waiting indefinitely for
an event that can be caused by only one of the waiting processes.
• Let S and Q be two semaphores initialized to 1
P0                  P1
wait(S);            wait(Q);
wait(Q);            wait(S);
...                 ...
signal(S);          signal(Q);
signal(Q);          signal(S);
• Starvation – indefinite blocking. A process may never be
removed from the semaphore queue in which it is suspended.
Two Types of Semaphores
• Counting semaphore – integer value can
range over an unrestricted domain.
• Binary semaphore – integer value can range
only between 0 and 1; can be simpler to
implement.
• Can implement a counting semaphore S as a
binary semaphore.
Implementing S as a Binary Semaphore
• Data structures:
binary-semaphore S1, S2;
int C;
• Initialization:
S1 = 1
S2 = 0
C = initial value of semaphore S
Implementing S
• wait operation
wait(S1);
C--;
if (C < 0) {
    signal(S1);
    wait(S2);
}
signal(S1);

• signal operation
wait(S1);
C++;
if (C <= 0)
    signal(S2);
else
    signal(S1);
Classical Problems of
Synchronization

• Bounded-Buffer Problem

• Readers and Writers Problem

• Dining-Philosophers Problem
Bounded-Buffer Problem
• Shared data

semaphore full, empty, mutex;

Initially:

full = 0, empty = n, mutex = 1


Bounded-Buffer Problem Producer Process

do {

produce an item in nextp

wait(empty);
wait(mutex);

add nextp to buffer

signal(mutex);
signal(full);
} while (1);
Bounded-Buffer Problem Consumer Process

do {
wait(full);
wait(mutex);

remove an item from buffer to nextc

signal(mutex);
signal(empty);

consume the item in nextc

} while (1);
Readers-Writers Problem
• Shared data

semaphore mutex, wrt;

Initially

mutex = 1, wrt = 1, readcount = 0


Readers-Writers Problem Writer Process

wait(wrt);

writing is performed

signal(wrt);
Readers-Writers Problem Reader Process

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);

reading is performed

wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);
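
Note (a standard observation, not on the slide): this is the first readers-writers problem. While any reader holds wrt, newly arriving readers enter without waiting, so a writer can starve if readers keep arriving.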
Dining-Philosophers Problem

• Shared data
semaphore chopstick[5];
Initially all values are 1
Dining-Philosophers Problem
• Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);

    eat

    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);

    think

} while (1);
Example – Bounded Buffer
• Shared data:

struct buffer {
    int pool[n];
    int count, in, out;
};
Bounded Buffer Producer Process
• Producer process inserts nextp into the shared
buffer

region buffer when (count < n) {
    pool[in] = nextp;
    in = (in+1) % n;
    count++;
}
Bounded Buffer Consumer Process
• Consumer process removes an item from the
shared buffer and puts it in nextc

region buffer when (count > 0) {
    nextc = pool[out];
    out = (out+1) % n;
    count--;
}
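
Conditional critical regions are a language construct; on a conventional system the when guard is commonly realized with a mutex and a condition variable. A hedged mapping of the producer’s region onto pthreads (names are illustrative):

#include <pthread.h>

#define N 10
int pool[N];
int count = 0, in = 0, out = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;
pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

void insert(int nextp)                        /* region buffer when (count < N) */
{
    pthread_mutex_lock(&lock);                /* mutual exclusion on the buffer */
    while (count == N)
        pthread_cond_wait(&not_full, &lock);  /* wait until the guard holds */
    pool[in] = nextp;
    in = (in + 1) % N;
    count++;
    pthread_cond_signal(&not_empty);          /* a consumer's guard may now hold */
    pthread_mutex_unlock(&lock);
}

The remove operation is symmetric: it waits on not_empty while count == 0 and signals not_full after taking an item from pool[out].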
Monitors
• High-level synchronization construct that allows the safe sharing of an abstract
data type among concurrent processes.

monitor monitor-name
{
    shared variable declarations
    procedure body P1 (…) {
        ...
    }
    procedure body P2 (…) {
        ...
    }
    procedure body Pn (…) {
        ...
    }
    {
        initialization code
    }
}
Monitors
• To allow a process to wait within the monitor,
a condition variable must be declared, as
condition x, y;
• Condition variable can only be used with the
operations wait and signal.
– The operation
x.wait();
means that the process invoking this operation is
suspended until another process invokes
x.signal();
– The x.signal operation resumes exactly one
suspended process. If no process is suspended,
then the signal operation has no effect.
Schematic View of a Monitor
Monitor With Condition Variables
Dining Philosophers Example
monitor dp
{
    enum {thinking, hungry, eating} state[5];
    condition self[5];
    void pickup(int i)   // following slides
    void putdown(int i)  // following slides
    void test(int i)     // following slides
    void init() {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}
Dining Philosophers
void pickup(int i) {
    state[i] = hungry;
    test(i);
    if (state[i] != eating)
        self[i].wait();
}

void putdown(int i) {
    state[i] = thinking;
    // test left and right neighbors
    test((i+4) % 5);
    test((i+1) % 5);
}
Dining Philosophers
void test(int i) {
    if ((state[(i + 4) % 5] != eating) &&
        (state[i] == hungry) &&
        (state[(i + 1) % 5] != eating)) {
        state[i] = eating;
        self[i].signal();
    }
}
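
Each philosopher i then invokes the monitor operations around eating, in the usual sequence for this example:

dp.pickup(i);
    ...
    eat
    ...
dp.putdown(i);

This ensures that no two neighbors eat simultaneously and that no deadlock occurs, although a philosopher may starve.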
