Course Content: Operating Systems
Introduction to OS
An operating system is a program that acts as an intermediary between a user of a
computer and the computer hardware.
A computer system can be divided into components:
o Operating system
o Application programs – define the ways in which the system resources are
used to solve the computing problems of the users
o Users
OS is a resource allocator
o Decides between conflicting requests for efficient and fair resource use
OS is a control program
o Controls execution of programs to prevent errors and improper use of the
computer
Computer-System Organization
One or more CPUs, device controllers connect through common bus providing
access to shared memory
Device controller informs CPU that it has finished its operation by causing an
interrupt
Interrupt transfers control to the interrupt service routine generally, through the
interrupt vector, which contains the addresses of all the service routines
The operating system preserves the state of the CPU by storing registers and the
program counter
Polling is the alternative to interrupts: the CPU repeatedly checks a device's status
register until the operation completes
Separate segments of code determine what action should be taken for each type
of interrupt
I/O Structure
Synchronous I/O: after I/O starts, control returns to the user program only upon I/O
completion
Asynchronous I/O: after I/O starts, control returns to the user program without
waiting for I/O completion
o System call – request to the operating system to allow user to wait for I/O
completion
o Device-status table contains entry for each I/O device indicating its type,
address, and state
o Operating system indexes into I/O device table to determine device status
and to modify the table entry to include the interrupt
Storage Structure
Main memory – the only large storage medium that the CPU can access directly
Direct memory access (DMA): the device controller transfers blocks of data from
buffer storage directly to main memory without CPU intervention
o Only one interrupt is generated per block, rather than one interrupt per
byte
Storage Hierarchy
o Speed
o Cost
o Volatility
Caching
o Copying information into a faster storage system; main memory can be
viewed as a cache for secondary storage
Magnetic disks
o Disk surface is logically divided into tracks, which are subdivided into
sectors
o The disk controller determines the logical interaction between the device
and the computer
Multiprocessor systems
o Advantages include
Increased throughput
Economy of scale
Increased reliability
o Two types
Asymmetric Multiprocessing
Symmetric Multiprocessing
Multiprogramming is needed for efficiency
o Single user cannot keep CPU and I/O devices busy at all times
o Multiprogramming organizes jobs (code and data) so CPU always has one
to execute
o A subset of total jobs in system is kept in memory
o When it has to wait (for I/O for example), OS switches to another job
o If processes don’t fit in memory, swapping moves them in and out to run
Other process problems include infinite loop, processes modifying each other or
the operating system
OS Services
One set of operating-system services provides functions that are helpful to the user:
o User interface - Almost all operating systems have a user interface (UI)
o Program execution - The system must be able to load a program into memory
and to run that program, end execution, either normally or abnormally
(indicating error)
o I/O operations - A running program may require I/O, which may involve a file
or an I/O device
o Error detection – the OS needs to be constantly aware of possible errors
May occur in the CPU and memory hardware, in I/O devices, in the user
program
For each type of error, the OS should take the appropriate action to ensure
correct and consistent computing
Another set of OS functions exists for ensuring the efficient operation of the system
itself via resource sharing
o Accounting - To keep track of which users use how much and what kinds of
computer resources
Three most common APIs are Win32 API for Windows, POSIX API for POSIX-
based systems (including virtually all versions of UNIX, Linux, and Mac OS X),
and Java API for the Java virtual machine (JVM)
Example
System call sequence to copy the contents of one file to another file
Typically, a number associated with each system call
The system call interface invokes intended system call in OS kernel and returns
status of the system call and any return values
The caller need know nothing about how the system call is implemented
o Just needs to obey the API and understand what the OS will do as a result of
the call
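As a sketch of the copy-file example above, the sequence can be written directly against the POSIX system-call interface (open, read, write, close). The file names and buffer size below are illustrative assumptions, not from the notes:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Copy src to dst using only POSIX system calls. */
int copy_file(const char *src, const char *dst) {
    char buf[4096];
    int in = open(src, O_RDONLY);
    if (in < 0) return -1;
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { close(in); return -1; }
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)
        if (write(out, buf, (size_t)n) != n) { close(in); close(out); return -1; }
    close(in);
    close(out);
    return n < 0 ? -1 : 0;
}

/* Create a small source file, copy it, and verify the copy byte-for-byte.
 * Returns 0 on success. */
int copy_demo(void) {
    const char *text = "hello, system calls\n";
    int fd = open("/tmp/gr_copy_src.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;
    write(fd, text, strlen(text));
    close(fd);
    if (copy_file("/tmp/gr_copy_src.txt", "/tmp/gr_copy_dst.txt") != 0) return -1;
    char buf[64] = {0};
    fd = open("/tmp/gr_copy_dst.txt", O_RDONLY);
    if (fd < 0) return -1;
    read(fd, buf, sizeof buf - 1);
    close(fd);
    return strcmp(buf, text) == 0 ? 0 : -1;
}
```

Each call here crosses the user/kernel boundary through the system-call interface the notes describe; the caller never sees how the kernel implements it.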
Types of System Calls
Process control
File management
Device management
Information maintenance
Communications
Protection
OS Structure
Layered Approach
The operating system is divided into a number of layers (levels), each built on top
of lower layers. The bottom layer (layer 0), is the hardware; the highest (layer N)
is the user interface.
With modularity, layers are selected such that each uses functions (operations)
and services of only lower-level layers
Benefits:
o Simplicity of construction and debugging
o More secure
Detriments:
o Difficulty of appropriately defining the layers
o Less efficient than other approaches
Virtual Machine
A virtual machine takes the layered approach to its logical conclusion. It treats
hardware and the operating system kernel as though they were all hardware
Process Management
An operating system executes a variety of programs:
A process includes:
o program counter
o stack
o data section
Process State
As a process executes, it changes state: new, ready, running, waiting, terminated
Process Control Block (PCB)
Information associated with each process:
Process state
Program counter
CPU registers
Memory-management information
Accounting information
Context Switching
When CPU switches to another process, the system must save the state of the
old process and load the saved state for the new process via a context switch
Context-switch time is overhead; the system does no useful work while switching
Ready queue – set of all processes residing in main memory, ready and waiting
to execute
Schedulers
Process Creation
Resource sharing
Execution
Address space
UNIX examples
Process Termination
Process executes last statement and asks the operating system to delete it (exit)
o If parent is exiting, some operating systems do not allow the child to
continue (cascading termination)
Cooperating Processes
Advantages of process cooperation:
o Information sharing
o Computation speedup
o Modularity
o Convenience
Interprocess Communication (IPC)
Two models:
o Shared memory
o Message passing
while (true) {
/* produce an item */
while (((in + 1) % BUFFER_SIZE) == out)
; /* buffer full */
buffer[in] = item;
in = (in + 1) % BUFFER_SIZE;
}
Fig: Producer Process
while (true) {
while (in == out)
; /* buffer empty */
item = buffer[out];
out = (out + 1) % BUFFER_SIZE;
}
Fig: Consumer Process
IPC-Message Passing
o send(message)
o receive(message)
Direct Communication
Indirect Communication
Messages are directed and received from mailboxes (also referred to as ports)
o Operations
create a mailbox
send and receive messages through the mailbox
destroy a mailbox
o Allow the system to select arbitrarily the receiver. Sender is notified who
the receiver was.
Synchronisation
Message passing may be either blocking (synchronous) or non-blocking
(asynchronous)
o Blocking send has the sender block until the message is received
o Blocking receive has the receiver block until a message is available
o Non-blocking send has the sender send the message and continue
o Non-blocking receive has the receiver receive a valid message or null
Buffering
Queue of messages attached to the link; implemented in one of three ways:
o Zero capacity – sender must wait for receiver (rendezvous)
o Bounded capacity – finite length of n messages; sender must wait if link full
o Unbounded capacity – sender never waits
Thread
A thread is a flow of execution through the process code, with its own
program counter, system registers and stack.
Benefits
Responsiveness
Resource Sharing
Economy
Scalability
User Threads
o POSIX Pthreads
o Win32 threads
o Java threads
Kernel Thread
Examples
o Windows XP/2000
o Solaris
o Linux
o Tru64 UNIX
o Mac OS X
Multithreading Models
Many-to-One
Many user-level threads mapped to a single kernel thread
o Examples: Solaris Green Threads, GNU Portable Threads
One-to-One
Each user-level thread maps to a kernel thread
o Examples
o Windows NT/XP/2000
o Linux
Many-to-Many
Allows many user-level threads to be mapped to many kernel threads
Thread library provides programmer with API for creating and managing threads
Pthreads
A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
Java Threads
Threading Issues
o Thread cancellation: asynchronous or deferred
o Signal handling
o Thread pools
o Thread-specific data
o Scheduler activations
Thread Cancellation
Terminating a thread before it has finished; asynchronous cancellation terminates
the target thread immediately, deferred cancellation allows the target thread to
periodically check whether it should be cancelled
Thread Pools
Create a number of threads in a pool where they await work
Advantages:
o Usually slightly faster to service a request with an existing thread than to
create a new thread
o Allows the number of threads in the application to be bound to the size of
the pool
Thread Scheduling
On systems with kernel threads, the operating system schedules kernel threads;
user-level threads are scheduled by the thread library onto available kernel threads
Process vs Thread
o Process switching needs interaction with the operating system; thread
switching does not.
o If one process is blocked, then no other process can execute until the first
process is unblocked; while one thread is blocked and waiting, a second
thread in the same task can run.
o Multiple processes without using threads use more resources; multithreaded
processes use fewer resources.
o In multiple processes, each process operates independently of the others;
one thread can read, write or change another thread's data.
Process Scheduling
Maximum CPU utilization obtained with multiprogramming
CPU Scheduler
Selects from among the processes in memory that are ready to execute, and
allocates the CPU to one of them
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-
term scheduler; this involves:
o switching context
o switching to user mode
o jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start
another running
Scheduling criteria:
o Max CPU utilization
o Max throughput
o Min turnaround time
o Min waiting time
o Min response time
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst. Use these
lengths to schedule the process with the shortest time
SJF is optimal – gives minimum average waiting time for a given set of
processes
Priority Scheduling
o Preemptive
o nonpreemptive
SJF is a priority scheduling where priority is the predicted next CPU burst
time
Round Robin (RR)
Each process gets a small unit of CPU time (time quantum), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and
added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then
each process gets 1/n of the CPU time in chunks of at most q time units at
once. No process waits more than (n-1)q time units.
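The fairness claim can be checked with a small simulation. This is a sketch that assumes all processes arrive at time 0 and ignores context-switch overhead; the burst times used in the test are the common textbook 24/3/3 example with q = 4:

```c
#include <string.h>

/* Simulate Round Robin (all processes arrive at time 0) and return the
 * total waiting time across all processes.  Supports up to 16 processes. */
int rr_total_waiting(const int *burst, int n, int quantum) {
    int rem[16], done[16] = {0}, finish[16];
    memcpy(rem, burst, (size_t)n * sizeof(int));
    int t = 0, left = n;
    while (left > 0) {
        for (int i = 0; i < n; i++) {       /* one pass around the ready queue */
            if (done[i]) continue;
            int slice = rem[i] < quantum ? rem[i] : quantum;
            t += slice;                     /* process i runs for one slice */
            rem[i] -= slice;
            if (rem[i] == 0) { done[i] = 1; finish[i] = t; left--; }
        }
    }
    int wait = 0;
    for (int i = 0; i < n; i++)
        wait += finish[i] - burst[i];       /* waiting = turnaround - burst */
    return wait;
}
```

For bursts {24, 3, 3} and q = 4 the schedule is P1 P2 P3 P1 P1 ..., giving waiting times 6, 4 and 7.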
Performance
o q large ⇒ behaves like FIFO
o q small ⇒ q must be large with respect to context-switch time, otherwise
overhead is too high
Multilevel Queue
Ready queue is partitioned into separate queues, e.g.:
o foreground (interactive) – RR
o background (batch) – FCFS
Scheduling must be done between the queues
o Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
o Time slice – each queue gets a certain amount of CPU time which it
can schedule amongst its processes; i.e., 80% to foreground in RR,
20% to background in FCFS
Multilevel Feedback Queue
A process can move between the various queues; aging can be implemented
this way
Multilevel-feedback-queue scheduler defined by the following parameters:
o number of queues
o scheduling algorithm for each queue
o method used to determine which queue a process will enter when that
process needs service
MODULE-II
Process Synchronization
Concurrent access to shared data may result in data inconsistency
Suppose the producer and consumer share a counter, updated with
count++;
and
count--;
Race Condition
A situation like this, where several processes access and manipulate the same
data concurrently and the outcome of the execution depends on the particular
order in which the access takes place, is called a race condition.
count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
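The lost update can be replayed deterministically by executing one unlucky interleaving of the two instruction sequences above by hand:

```c
/* Replay one unlucky interleaving of count++ (producer) and count--
 * (consumer), each compiled into a load / modify / store sequence. */
int lost_update_demo(int count) {
    int register1, register2;
    register1 = count;          /* producer: register1 = count          */
    register2 = count;          /* consumer: register2 = count          */
    register1 = register1 + 1;  /* producer: register1 = register1 + 1  */
    register2 = register2 - 1;  /* consumer: register2 = register2 - 1  */
    count = register1;          /* producer: count = register1          */
    count = register2;          /* consumer: count = register2          */
    return count;               /* the consumer's store wins            */
}
```

Starting from count = 5, this interleaving ends at 4 even though one increment and one decrement should leave the counter at 5; a different interleaving would end at 6.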
Requirements for a solution to the critical-section problem:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the
processes that will enter the critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted
Peterson's Solution
Two-process software solution.
Assume that the LOAD and STORE instructions are atomic; that is, cannot be
interrupted.
The two processes share two variables:
o int turn;
o boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section.
The flag array is used to indicate if a process is ready to enter the critical section.
flag[i] = true implies that process Pi is ready!
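A minimal runnable sketch of Peterson's algorithm with POSIX threads follows. The atomic LOAD/STORE assumption above does not hold on modern hardware, so sequentially consistent fences are added; this is illustrative, not production synchronization code:

```c
#include <pthread.h>
#include <stdatomic.h>

static volatile int flag[2];
static volatile int turn;
static volatile long counter;

static void enter_region(int i) {
    int j = 1 - i;
    flag[i] = 1;                 /* I am ready */
    turn = j;                    /* give the other process priority */
    atomic_thread_fence(memory_order_seq_cst);
    while (flag[j] && turn == j)
        ;                        /* busy wait */
    atomic_thread_fence(memory_order_seq_cst);
}

static void leave_region(int i) {
    atomic_thread_fence(memory_order_seq_cst);
    flag[i] = 0;
}

static void *peterson_worker(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        enter_region(i);
        counter++;               /* critical section */
        leave_region(i);
    }
    return 0;
}

/* Run both threads; returns 200000 when mutual exclusion holds. */
long peterson_demo(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    counter = 0;
    flag[0] = flag[1] = 0;
    pthread_create(&t0, 0, peterson_worker, &id0);
    pthread_create(&t1, 0, peterson_worker, &id1);
    pthread_join(t0, 0);
    pthread_join(t1, 0);
    return counter;
}
```

Without the entry protocol, the counter would almost certainly come out below 200000 because of the race condition described earlier.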
Hardware Synchronization
Atomic = non-interruptable
o Either test memory word and set value
o Or swap contents of two memory words
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
TestAndSet Instruction
Definition:
boolean TestAndSet (boolean *target) {
boolean rv = *target;
*target = TRUE;
return rv;
}
Solution (shared boolean variable lock, initialized to FALSE):
do {
while (TestAndSet(&lock))
; // do nothing
// critical section
lock = FALSE;
// remainder section
} while (TRUE);
Swap Instruction
Definition:
void Swap (boolean *a, boolean *b) {
boolean temp = *a;
*a = *b;
*b = temp;
}
Shared Boolean variable lock initialized to FALSE; Each process has a local
Boolean variable key
Solution:
do {
key = TRUE;
while (key == TRUE)
Swap(&lock, &key);
// critical section
lock = FALSE;
// remainder section
} while (TRUE);
Bounded-waiting mutual exclusion with TestAndSet:
do {
waiting[i] = TRUE;
key = TRUE;
while (waiting[i] && key)
key = TestAndSet(&lock);
waiting[i] = FALSE;
// critical section
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = FALSE;
else
waiting[j] = FALSE;
// remainder section
} while (TRUE);
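C11 exposes a hardware test-and-set through atomic_flag, so the basic TestAndSet lock can be written as a runnable sketch (the thread and iteration counts are arbitrary choices):

```c
#include <pthread.h>
#include <stdatomic.h>

/* A spinlock built from hardware test-and-set:
 * atomic_flag_test_and_set is C11's TestAndSet. */
static atomic_flag tas_lock = ATOMIC_FLAG_INIT;
static long shared_count;

static void *spin_worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        while (atomic_flag_test_and_set(&tas_lock))
            ;                               /* spin until TAS returns FALSE */
        shared_count++;                     /* critical section */
        atomic_flag_clear(&tas_lock);       /* lock = FALSE */
    }
    return 0;
}

/* Returns nthreads * 100000 when mutual exclusion holds (nthreads <= 8). */
long spinlock_demo(int nthreads) {
    pthread_t t[8];
    shared_count = 0;
    for (int i = 0; i < nthreads; i++) pthread_create(&t[i], 0, spin_worker, 0);
    for (int i = 0; i < nthreads; i++) pthread_join(t[i], 0);
    return shared_count;
}
```

Note this simple version does not give the bounded-waiting guarantee of the waiting[] variant above; an unlucky thread can in principle spin indefinitely.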
Semaphore
Synchronization tool that is less complicated than the hardware-based solutions
Semaphore S – integer variable, accessed only via two standard atomic operations:
wait (S) {
while (S <= 0)
; // no-op
S--;
}
signal (S) {
S++;
}
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);
Semaphore Implementation
Must guarantee that no two processes can execute wait () and signal () on the
same semaphore at the same time
Thus, implementation becomes the critical section problem where the wait and
signal code are placed in the critical section.
Note that applications may spend lots of time in critical sections and therefore
this is not a good solution.
Two operations:
o block – place the process invoking the operation on the appropriate
waiting queue
o wakeup – remove one of the processes in the waiting queue and place it
in the ready queue
Implementation of wait:
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
Implementation of signal:
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
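A user-level sketch of a blocking semaphore can be built on a pthread mutex and condition variable, with pthread_cond_wait standing in for block() and pthread_cond_signal for wakeup(). One deliberate simplification: this variant blocks while the value is zero instead of letting the count go negative, which has the same external behavior:

```c
#include <pthread.h>

/* A counting semaphore with a blocked-waiter queue (held by the condition
 * variable) instead of busy waiting. */
typedef struct {
    int value;
    pthread_mutex_t m;
    pthread_cond_t  c;
} sem;

void sem_init_(sem *s, int v) {
    s->value = v;
    pthread_mutex_init(&s->m, 0);
    pthread_cond_init(&s->c, 0);
}

void sem_wait_(sem *s) {
    pthread_mutex_lock(&s->m);
    while (s->value == 0)                  /* would go negative: block() */
        pthread_cond_wait(&s->c, &s->m);
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void sem_signal_(sem *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->c);            /* wakeup() one waiter */
    pthread_mutex_unlock(&s->m);
}

int sem_value_(sem *s) {
    pthread_mutex_lock(&s->m);
    int v = s->value;
    pthread_mutex_unlock(&s->m);
    return v;
}
```

The while loop around pthread_cond_wait guards against spurious wakeups, the same reason the textbook version re-checks the semaphore state after being woken.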
Classical Problems of Synchronization
Bounded-Buffer Problem
Readers-Writers Problem
Dining-Philosophers Problem
Bounded-Buffer Problem
The pool consists of n buffers, each capable of holding one item. The mutex semaphore
provides mutual exclusion for accesses to the buffer pool and is initialized to the value
1. The empty and full semaphores count the number of empty and full buffers. The
semaphore empty is initialized to the value n; the semaphore full is initialized to the
value 0.
N buffers, each can hold one item
The structure of the producer process:
do {
// produce an item in nextp
wait (empty);
wait (mutex);
// add the item to the buffer
signal (mutex);
signal (full);
} while (TRUE);
The structure of the consumer process:
do {
wait (full);
wait (mutex);
// remove an item from buffer to nextc
signal (mutex);
signal (empty);
// consume the item in nextc
} while (TRUE);
Readers-Writers Problem
o Readers – only read the data set; they do not perform any updates
o Writers – can both read and write
Problem – allow multiple readers to read at the same time. Only one single
writer can access the shared data at the same time
Shared Data
o Data set
o Semaphore mutex initialized to 1
o Semaphore wrt initialized to 1
o Integer readcount initialized to 0
The structure of a writer process:
do {
wait (wrt) ;
// writing is performed
signal (wrt) ;
} while (TRUE);
The structure of a reader process:
do {
wait (mutex) ;
readcount++ ;
if (readcount == 1)
wait (wrt) ;
signal (mutex) ;
// reading is performed
wait (mutex) ;
readcount-- ;
if (readcount == 0)
signal (wrt) ;
signal (mutex) ;
} while (TRUE);
Dining-Philosophers Problem
Consider five philosophers who spend their lives thinking and eating. The philosophers
share a circular table surrounded by five chairs, each belonging to one philosopher. In
the center of the table is a bowl of rice, and the table is laid with five single chopsticks.
When a philosopher thinks, she does not interact with her colleagues. From time to
time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest
to her (the chopsticks that are between her and her left and right neighbors). A
philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a
chopstick that is already in the hand of a neighbor. When a hungry philosopher has both
her chopsticks at the same time, she eats without releasing the chopsticks. When she is
finished eating, she puts down both chopsticks and starts thinking again.
Shared data
o Bowl of rice (data set)
o Semaphore chopstick [5] initialized to 1
The structure of Philosopher i:
do {
wait ( chopstick[i] );
wait ( chopStick[ (i + 1) % 5] );
// eat
signal ( chopstick[i] );
signal (chopstick[ (i + 1) % 5] );
// think
} while (TRUE);
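The structure above can deadlock if all five philosophers pick up their left chopstick simultaneously. One standard remedy (not spelled out in these notes) is resource ordering: every philosopher picks up the lower-numbered chopstick first, so a circular wait cannot form. A runnable sketch with pthread mutexes as chopsticks:

```c
#include <pthread.h>

#define N 5
#define MEALS 1000

static pthread_mutex_t chopstick[N];
static int meals_eaten[N];

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    int first  = left < right ? left : right;    /* lower-numbered chopstick */
    int second = left < right ? right : left;
    for (int m = 0; m < MEALS; m++) {
        pthread_mutex_lock(&chopstick[first]);
        pthread_mutex_lock(&chopstick[second]);
        meals_eaten[i]++;                        /* eat */
        pthread_mutex_unlock(&chopstick[second]);
        pthread_mutex_unlock(&chopstick[first]);
        /* think */
    }
    return 0;
}

/* Returns N * MEALS when every philosopher finishes (no deadlock). */
int philosophers_demo(void) {
    pthread_t t[N];
    int id[N], total = 0;
    for (int i = 0; i < N; i++) pthread_mutex_init(&chopstick[i], 0);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], 0, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++) {
        pthread_join(t[i], 0);
        total += meals_eaten[i];
    }
    return total;
}
```

With the naive "left then right" order, philosopher 4 and philosopher 0 would contend for chopstick 0 in opposite acquisition orders; the ordering rule makes every philosopher acquire in the same global order, breaking the circular-wait condition.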
Monitors
A high-level abstraction that provides a convenient and effective mechanism for
process synchronization
Only one process may be active within the monitor at a time
monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }
…
procedure Pn (…) {……}
Initialization code ( ….) { … }
}
Schematic view of a Monitor
Condition Variables
condition x, y;
Two operations on a condition variable:
o x.wait () – a process that invokes the operation is suspended until another
process invokes x.signal ()
o x.signal () – resumes one of the processes (if any) that invoked x.wait ()
Monitor with Condition Variables
Monitor Implementation Using Semaphores
Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next_count = 0;
Each procedure F will be replaced by
wait(mutex);
…
body of F;
…
if (next_count > 0)
signal(next);
else
signal(mutex);
Mutual exclusion within a monitor is ensured.
Monitor Implementation
For each condition variable x, we have:
semaphore x_sem; // (initially = 0)
int x_count = 0;
The operation x.wait can be implemented as:
x_count++;
if (next_count > 0)
signal(next);
else
signal(mutex);
wait(x_sem);
x_count--;
The operation x.signal can be implemented as:
if (x_count > 0) {
next_count++;
signal(x_sem);
wait(next);
next_count--;
}
Deadlock
A set of blocked processes each holding a resource and waiting to acquire a
resource held by another process in the set
Example
o P1 and P2 each hold one disk drive and each needs another one
Example: semaphores A and B, each initialized to 1
P0: wait(A); wait(B);
P1: wait(B); wait(A);
System Model
Resource types R1, R2, . . ., Rm (CPU cycles, memory space, I/O devices)
Each process utilizes a resource as follows:
o request
o use
o release
Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously:
Mutual exclusion: only one process at a time can use a resource
Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes
No preemption: a resource can be released only voluntarily by the process
holding it, after that process has completed its task
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that
P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is
held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting
for a resource that is held by P0.
Resource-Allocation Graph
o P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
o R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
o Request edge – directed edge Pi → Rj: Pi requests an instance of Rj
o Assignment edge – directed edge Rj → Pi: Pi is holding an instance of Rj
Ignore the problem and pretend that deadlocks never occur in the system; used
by most operating systems, including UNIX
Deadlock Prevention
Mutual Exclusion – not required for sharable resources; must hold for
nonsharable resources
Hold and Wait – must guarantee that whenever a process requests a resource,
it does not hold any other resources
No Preemption –
o Preempted resources are added to the list of resources for which the
process is waiting
o Process will be restarted only when it can regain its old resources, as well
as the new ones that it is requesting
Circular Wait – impose a total ordering of all resource types, and require that
each process requests resources in an increasing order of enumeration
Deadlock Avoidance
Requires that the system has some additional a priori information available
Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need
Safe state
System is in safe state if there exists a sequence <P1, P2, …, Pn> of ALL the
processes in the system such that for each Pi, the resources that Pi can still
request can be satisfied by currently available resources + resources held by all
the Pj, with j < i
That is:
o If Pi resource needs are not immediately available, then Pi can wait until all
Pj have finished
Facts
o If a system is in safe state ⇒ no deadlocks
o If a system is in unsafe state ⇒ possibility of deadlock
o Avoidance ⇒ ensure that a system will never enter an unsafe state
Banker’s Algorithm
Assumptions
Multiple instances
Each process must a priori claim maximum use
When a process requests a resource it may have to wait
When a process gets all its resources it must return them in a finite amount of
time
Data Structures for the Banker's Algorithm
Let n = number of processes, and m = number of resources types.
Available: Vector of length m. If available [j] = k, there are k instances of
resource type Rj available
Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k
instances of resource type Rj
Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k
instances of Rj
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to
complete its task
Need [i,j] = Max[i,j] – Allocation [i,j]
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish [i] = false for i = 0, 1, …, n- 1
2. Find an i such that both:
(a) Finish [i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish [i] == true for all i, then the system is in a safe state
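The safety algorithm can be written as a short function. The matrices used in the test are the common textbook five-process, three-resource example; the fixed NP/NR sizes are a simplification for illustration:

```c
#define NP 5   /* number of processes */
#define NR 3   /* number of resource types */

/* Safety algorithm: returns 1 if the state admits a safe sequence. */
int is_safe(const int available[NR],
            const int max[NP][NR],
            const int alloc[NP][NR]) {
    int work[NR], finish[NP] = {0};
    for (int j = 0; j < NR; j++) work[j] = available[j];   /* Work = Available */
    for (int pass = 0; pass < NP; pass++) {
        for (int i = 0; i < NP; i++) {
            if (finish[i]) continue;
            int ok = 1;                        /* Need_i <= Work ?            */
            for (int j = 0; j < NR; j++)
                if (max[i][j] - alloc[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                          /* Work = Work + Allocation_i  */
                for (int j = 0; j < NR; j++) work[j] += alloc[i][j];
                finish[i] = 1;
            }
        }
    }
    for (int i = 0; i < NP; i++)
        if (!finish[i]) return 0;              /* some Finish[i] still false  */
    return 1;
}
```

The outer pass loop simply repeats step 2 until no further process can be finished, which needs at most NP passes.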
Resource Request Algorithm
Requesti = request vector for process Pi. If Requesti [j] = k then process Pi wants k
instances of resource type Rj
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise error condition, since process
has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since resources
are not available
3. Pretend to allocate requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
If safe ⇒ the resources are allocated to Pi
If unsafe ⇒ Pi must wait, and the old resource-allocation state is
restored
Deadlock Detection
Allow system to enter deadlock state
Detection algorithm
Recovery scheme
Recovery from Deadlock
A. Process Termination
Abort all deadlocked processes
Abort one process at a time until the deadlock cycle is eliminated
In which order should we choose to abort?
o Priority of the process
o How long process has computed, and how much longer to completion
o Resources the process has used
o Resources process needs to complete
o How many processes will need to be terminated
B. Resource Preemption
Selecting a victim – minimize cost
Rollback – return to some safe state, restart process for that state
Starvation – same process may always be picked as victim, include
number of rollback in cost factor
Memory Management
Program must be brought (from disk) into memory and placed within a process
for it to be run
Main memory and registers are the only storage the CPU can access directly
A pair of base and limit registers define the logical address space
Logical vs Physical Address Space
Logical and physical addresses are the same in compile-time and load-time
address-binding schemes; logical (virtual) and physical addresses differ in
execution-time address-binding scheme
Address Binding
o Execution time: Binding delayed until run time if the process can be
moved during its execution from one memory segment to another. Need
hardware support for address maps (e.g., base and limit registers)
In MMU scheme, the value in the relocation register is added to every address
generated by a user process at the time it is sent to memory
The user program deals with logical addresses; it never sees the real physical
addresses
Dynamic Loading
Routine is not loaded until it is called; better memory-space utilization, since an
unused routine is never loaded
Dynamic Linking
Linking postponed until execution time
Small piece of code, stub, used to locate the appropriate memory-resident library
routine
Stub replaces itself with the address of the routine, and executes the routine
Swapping
Backing store – fast disk large enough to accommodate copies of all memory
images for all users; must provide direct access to these memory images
Major part of swap time is transfer time; total transfer time is directly proportional
to the amount of memory swapped
Modified versions of swapping are found on many systems (i.e., UNIX, Linux,
and Windows)
Relocation registers used to protect user processes from each other, and from
changing operating-system code and data
Multiple-partition allocation
How to satisfy a request of size n from a list of free holes:
First-fit: Allocate the first hole that is big enough
Best-fit: Allocate the smallest hole that is big enough; must search entire list,
unless ordered by size
Worst-fit: Allocate the largest hole; must also search entire list
Fragmentation
o External fragmentation – total free memory exists to satisfy a request, but it
is not contiguous
o Reduce external fragmentation by compaction
o Shuffle memory contents to place all free memory together in one large
block
o I/O problem
Paging
Divide physical memory into fixed-sized blocks called frames (size is power of 2,
between 512 bytes and 8,192 bytes)
Internal fragmentation may occur in a process's last frame
Address generated by CPU is divided into:
o Page number (p) – used as an index into a page table which contains
base address of each page in physical memory
o Page offset (d) – combined with base address to define the physical
memory address that is sent to the memory unit
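For a power-of-two page size, the p/d split is just division and remainder (equivalently, bit shifting and masking). A sketch with an assumed 4 KB page size and a made-up four-entry page table:

```c
#define PAGE_SIZE 4096u         /* 2^12: the offset is the low 12 bits */

/* Page number p: high-order bits of the logical address. */
unsigned page_number(unsigned logical) { return logical / PAGE_SIZE; }

/* Page offset d: low-order bits of the logical address. */
unsigned page_offset(unsigned logical) { return logical % PAGE_SIZE; }

/* page_table[p] holds the frame number for page p (illustrative values).
 * Physical address = frame base + offset. */
unsigned translate(unsigned logical, const unsigned *page_table) {
    return page_table[page_number(logical)] * PAGE_SIZE + page_offset(logical);
}
```

For example, logical address 8196 is page 2, offset 4; if page 2 lives in frame 1, the physical address is 1 * 4096 + 4 = 4100.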
Memory Protection
Implemented by attaching a valid-invalid bit to each entry in the page table:
o “valid” indicates that the associated page is in the process’ logical address
space, and is thus a legal page
o “invalid” indicates that the page is not in the process’ logical address
space
Shared Pages
Shared code
o One copy of read-only (reentrant) code shared among processes (i.e., text
editors, compilers, window systems).
o Shared code must appear in same location in the logical address space of
all processes
o The pages for the private code and data can appear anywhere in the
logical address space
Hierarchical Paging
Break up the logical address space into multiple page tables; a simple technique
is a two-level page table
Hashed Page Tables
o The virtual page number is hashed into a page table; this page table
contains a chain of elements hashing to the same location
o Virtual page numbers are compared in this chain searching for a match
o Use hash table to limit the search to one — or at most a few — page-table
entries
Inverted Page Table
o One entry for each real page (frame) of memory
o Entry consists of the virtual address of the page stored in that real memory
location, with information about the process that owns that page
o Decreases memory needed to store each page table, but increases time needed
to search the table when a page reference occurs
Segmentation
Memory-management scheme that supports user view of memory
A program is a collection of segments
o A segment is a logical unit such as: main program, procedure, function,
method, object, local variables, global variables, common block, stack,
symbol table, arrays
Logical address consists of a two tuple:
<segment-number, offset>,
Segment table – maps two-dimensional user-defined addresses into one-
dimensional physical addresses; each table entry has:
o base – contains the starting physical address where the segments reside
in memory
o limit – specifies the length of the segment
Segment-table base register (STBR) points to the segment table’s location in
memory
Segment-table length register (STLR) indicates number of segments used by a
program;
segment number s is legal if s < STLR
Protection
o With each entry in segment table associate:
validation bit = 0 ⇒ illegal segment
read/write/execute privileges
Protection bits associated with segments; code sharing occurs at segment level
Since segments vary in length, memory allocation is a dynamic storage-
allocation problem
A segmentation example is shown in the following diagram
Virtual memory can be implemented via:
o Demand paging
o Demand segmentation
Demand Paging
Bring a page into memory only when it is needed
o Less I/O needed
o Less memory needed
o Faster response
o More users
Lazy swapper – never swaps a page into memory unless the page will be needed
Page Fault
If there is a reference to a page, the first reference to that page will trap to the
operating system: page fault
1. Operating system looks at another table to decide whether the reference was
invalid or the page is just not in memory
2. Get an empty frame
3. Swap the page into the frame
4. Reset tables
5. Set validation bit = v
6. Restart the instruction that caused the page fault
Page Replacement
Prevent over-allocation of memory by modifying page-fault service routine to
include page replacement
Use modify (dirty) bit to reduce overhead of page transfers – only modified pages
are written to disk
Bring the desired page into the (newly) free frame; update the page and frame
tables
Restart the process
FIFO (First-in-First-Out)
A FIFO replacement algorithm associates with each page the time when that
page was brought into memory.
When a page must be replaced, the oldest page is chosen.
Belady’s Anomaly: more frames ⇒ more page faults (for some page-
replacement algorithms, the page-fault rate may increase as the number of
allocated frames increases)
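FIFO fault counts can be reproduced with a short simulator; the reference string in the test below is the classic one that exhibits Belady's anomaly (9 faults with 3 frames, 10 faults with 4 frames):

```c
/* Count page faults for FIFO replacement over a reference string.
 * Supports up to 16 frames. */
int fifo_faults(const int *refs, int n, int nframes) {
    int frames[16];
    int used = 0, oldest = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes)
            frames[used++] = refs[i];           /* free frame available */
        else {
            frames[oldest] = refs[i];           /* evict the oldest page */
            oldest = (oldest + 1) % nframes;
        }
    }
    return faults;
}
```

Running it on 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 shows the anomaly directly: adding a frame makes FIFO worse on this string.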
OPTIMAL PAGE REPLACEMENT
Replace page that will not be used for longest period of time
Allocation of Frames
Each process needs minimum number of pages
Two major allocation schemes
o fixed allocation
o priority allocation
Equal allocation – For example, if there are 100 frames and 5 processes, give
each process 20 frames.
Proportional allocation – Allocate according to the size of the process:
s_i = size of process p_i
S = Σ s_i
m = total number of frames
a_i = allocation for p_i = (s_i / S) × m
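The proportional formula, with integer truncation, in code; the sizes in the test are the common textbook example of a 10-page and a 127-page process sharing 62 frames:

```c
/* Proportional frame allocation: a_i = (s_i / S) * m, truncated.
 * size[] holds process sizes; results go into alloc[]. */
void proportional_alloc(const int *size, int n, int m, int *alloc) {
    int S = 0;
    for (int i = 0; i < n; i++) S += size[i];          /* S = sum of s_i */
    for (int i = 0; i < n; i++)
        alloc[i] = (int)((long)size[i] * m / S);       /* a_i = s_i/S * m */
}
```

For sizes 10 and 127 with m = 62 frames this yields 4 and 57 frames respectively (the one leftover frame would be assigned by some tie-breaking rule).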
Global vs Local Allocation
Global replacement – process selects a replacement frame from the set of all
frames; one process can take a frame from another
Local replacement – each process selects from only its own set of allocated
frames
Thrashing
If a process does not have “enough” pages, the page-fault rate is very high. This
leads to:
o low CPU utilization
o operating system thinks that it needs to increase the degree of
multiprogramming
o another process added to the system
Thrashing ⇒ a process is busy swapping pages in and out
MODULE-III
File System
File
Types:
o Data
numeric
character
binary
o Program
File Structure
o None – sequence of words, bytes
o Simple record structure
Lines
Fixed length
Variable length
o Complex structures
Formatted document
Can simulate last two with first method by inserting appropriate control
characters
Who decides:
o Operating system
o Program
File Attributes
o Name – the only information kept in human-readable form
o Type – needed for systems that support different file types
o Location – pointer to file location on device
o Size – current file size
o Protection – controls who can do reading, writing, executing
o Time, date, and user identification – data for protection, security, and usage
monitoring
Information about files are kept in the directory structure, which is maintained on
the disk
File Types
File Operations
Open(Fi) – search the directory structure on disk for entry Fi, and move the
content of entry to memory
Close (Fi) – move the content of entry Fi in memory to directory structure on disk
o Reposition within file – reset position to n
o Rewrite n
A. Single-Level Directory
A single directory for all users
Naming problem
Grouping problem
B. Two-Level Directory
Separate directory for each user
Path name
Efficient searching
No grouping capability
C. Tree-Structured Directory
Efficient searching
Grouping Capability
File Sharing
Client-server model allows clients to mount remote file systems from servers
o Standard operating system file calls are translated into remote calls
Remote file systems add new failure modes, due to network failure, server failure
Recovery from failure can involve state information about status of each remote
request
Stateless protocols such as NFS include all information in each request, allowing
easy recovery but less security
Consistency semantics specify how multiple users are to access a shared file
simultaneously
Tend to be less complex due to disk I/O and network latency (for
remote file systems)
Allocation Methods
An allocation method refers to how disk blocks are allocated for files:
A. Contiguous Allocation
o Each file occupies a set of contiguous blocks on the disk
o Random access
B. Linked Allocation
o Each file is a linked list of disk blocks: blocks may be scattered anywhere on the
disk
o No random access
o Mapping
C. Indexed Allocation
o Each file has its own index block of pointers to its data blocks
o Random access
o Dynamic access without external fragmentation, but have overhead of index
block
Disk Structure
Disk drives are addressed as large 1-dimensional arrays of logical blocks, where the
logical block is the smallest unit of transfer.
The 1-dimensional array of logical blocks is mapped into the sectors of the disk
sequentially.
o Sector 0 is the first sector of the first track on the outermost cylinder.
o Mapping proceeds in order through that track, then the rest of the tracks in
that cylinder, and then through the rest of the cylinders from outermost to
innermost.
Disk Scheduling
The operating system is responsible for using hardware efficiently — for the disk
drives, this means having a fast access time and disk bandwidth.
o Seek time is the time for the disk arm to move the heads to the cylinder
containing the desired sector.
o Rotational latency is the additional time waiting for the disk to rotate the
desired sector to the disk head.
Disk bandwidth is the total number of bytes transferred, divided by the total time
between the first request for service and the completion of the last transfer.
FCFS
This algorithm is intrinsically fair, but it generally does not provide the fastest
service.
SSTF
Selects the request with the minimum seek time from the current head position.
SCAN
The disk arm starts at one end of the disk, and moves toward the other end,
servicing requests until it gets to the other end of the disk, where the head
movement is reversed and servicing continues.
C-SCAN
The head moves from one end of the disk to the other, servicing requests as it
goes. When it reaches the other end, however, it immediately returns to the
beginning of the disk, without servicing any requests on the return trip.
Treats the cylinders as a circular list that wraps around from the last cylinder to
the first one
C-LOOK
Version of C-SCAN
Arm only goes as far as the last request in each direction, then reverses direction
immediately, without first going all the way to the end of the disk.
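These algorithms can be compared by total head movement. A sketch of FCFS and SSTF over the textbook request queue (98, 183, 37, 122, 14, 124, 65, 67 with the head starting at cylinder 53) gives 640 and 236 cylinders respectively:

```c
#include <stdlib.h>
#include <string.h>

/* Total head movement (in cylinders) under FCFS. */
int fcfs_movement(const int *q, int n, int head) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(q[i] - head);     /* serve requests in arrival order */
        head = q[i];
    }
    return total;
}

/* Total head movement under SSTF: always serve the closest pending request.
 * Supports up to 32 requests. */
int sstf_movement(const int *q, int n, int head) {
    int pend[32], done[32] = {0}, total = 0;
    memcpy(pend, q, (size_t)n * sizeof(int));
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)    /* pick the minimum-seek request */
            if (!done[i] &&
                (best < 0 || abs(pend[i] - head) < abs(pend[best] - head)))
                best = i;
        total += abs(pend[best] - head);
        head = pend[best];
        done[best] = 1;
    }
    return total;
}
```

SSTF's greedy choice cuts the movement by almost two thirds here, but as the notes imply it can starve requests far from the head, which SCAN-family algorithms avoid.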
Disk Management
Low-level formatting, or physical formatting — Dividing a disk into sectors that
the disk controller can read and write.
To use a disk to hold files, the operating system still needs to record its own data
structures on the disk.
The controller can be told to replace each bad sector logically with one of the
spare sectors. This scheme is known as sector sparing or forwarding.
A swap space can reside in one of two places: it can be carved out of the normal
file system, or it can be in a separate disk partition.
If the swap space is simply a large file within the file system, normal file-system
routines can be used to create it, name it, and allocate its space.
I/O Systems
I/O Hardware
A bus is a set of wires and a rigidly defined protocol that specifies a set of
messages that can be sent on the wires.
When device A has a cable that plugs into device B, and device B has a cable
that plugs into device C, and device C plugs into a port on the computer, this
arrangement is called a daisy chain. A daisy chain usually operates as a bus.
Because the SCSI protocol is complex, the SCSI bus controller is often implemented
as a separate circuit board (or a host adapter) that plugs into the computer. It
typically contains a processor, microcode, and some private memory to enable it
to process the SCSI protocol messages.
Polling
The host repeatedly reads the device's status register to determine the state of the
device:
o command-ready
o busy
o error
Interrupts
Most CPUs have two interrupt request lines. One is the nonmaskable interrupt,
which is reserved for events such as unrecoverable memory errors.
The second interrupt line is maskable: it can be turned off by the CPU before the
execution of critical instruction sequences that must not be interrupted.
Devices vary on many dimensions:
o Character-stream or block
o Sequential or random-access
o Sharable or dedicated
o Speed of operation
o read-write, read only, or write only
Network Devices
o Difficult to use
Scheduling
o Key to performance
o i.e., Printing
Reference
Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne,
"Operating System Concepts", Ninth Edition.