II BSC IV SEM OS
UNIT- I
What is Operating System? History and Evolution of OS, Basic OS functions,
Resource Abstraction, Types of Operating Systems– Multiprogramming Systems,
Batch Systems, Time Sharing Systems; Operating Systems for Personal
Computers, Workstations and Hand-held Devices, Process Control & Real time
Systems.
UNIT- II
Processor and User Modes, Kernels, System Calls and System Programs, System
View of the Process and Resources, Process Abstraction, Process Hierarchy,
Threads, Threading Issues, Thread Libraries; Process Scheduling, Non-Preemptive
and Preemptive Scheduling Algorithms.
UNIT III
Process Management: Deadlock, Deadlock Characterization, Necessary and
Sufficient Conditions for Deadlock, Deadlock Handling Approaches: Deadlock
Prevention, Deadlock Avoidance and Deadlock Detection and Recovery.
Concurrent and Dependent Processes, Critical Section, Semaphores, Methods for
Inter-process Communication; Process Synchronization, Classical Process
Synchronization Problems: Producer-Consumer, Reader-Writer.
UNIT IV
Memory Management: Physical and Virtual Address Space; Memory Allocation
Strategies– Fixed and Variable Partitions, Paging, Segmentation, Virtual Memory.
UNIT V
File and I/O Management, OS Security: Directory Structure, File Operations, File
Allocation Methods, Device Management, Pipes, Buffer, Shared Memory, Security
Policy Mechanism, Protection, Authentication and Internal Access Authorization
REFERENCE BOOKS:
1. Operating System Principles by Abraham Silberschatz, Peter Baer Galvin and
Greg Gagne (7th Edition), Wiley India Edition.
2. Operating Systems: Internals and Design Principles by Stallings (Pearson)
3. Operating Systems by J. Archer Harris (Author), Jyoti Singh (Author) (TMH)
3. File Management: The various file operations include creating/deleting files, backing up files, mapping files onto secondary memory, etc.
4. Error Handling:
Various types of errors can occur while a computer system is running. These include internal and external hardware errors, such as memory errors, device-failure errors, etc. In each case, the OS must respond so as to clear the error condition with the least impact on running applications.
Multiprogramming Systems
[Figure: Process 1, Process 2, and Process 3 sharing a single CPU in a multiprogramming system]
The operating system picks one job from memory and begins to execute it.
When this job needs an I/O operation, the operating system switches to another job
(so the CPU and the OS are always busy).
The jobs kept in memory are always fewer than the jobs on disk (the job pool).
If several jobs are ready to run at the same time, the OS chooses which one to run
based on a CPU scheduling method.
In a multiprogramming system, the CPU is never idle and keeps on processing.
1. Client-Server Model: In this model, the client sends a resource request to the
server, and the server provides the requested resource to the client. The following
diagram shows the Client-Server Model.
[Figure: Client-Server Model — several clients connected to a server through a network]
2. Peer-to-Peer Model
2. macOS
macOS (previously called OS X) is a line of operating systems created by Apple.
It comes preloaded on all Macintosh computers, or Macs.
Some of the specific versions include Mojave (released in 2018), High Sierra (2017),
and Sierra (2016).
3. Solaris
Best for Large workload processing, managing multiple databases, etc.
Solaris is a UNIX-based operating system which was originally developed by Sun
Microsystems in the mid-’90s.
In 2010 it was renamed Oracle Solaris after Oracle acquired Sun Microsystems. It is
known for its scalability and for several other features such as DTrace, ZFS, and Time Slider.
4. Linux
Linux was introduced by Linus Torvalds; much of its accompanying system software comes from the Free Software Foundation (FSF).
Linux (pronounced LINN-ux) is a family of open-source operating systems,
which means they can be modified and distributed by anyone around the world.
This is different from proprietary software like Windows, which can only be modified by
the company that owns it.
The advantages of Linux are that it is free, and there are many different distributions—or
versions—you can choose from.
5. Chrome OS
Best for web applications.
Chrome OS is another Linux-kernel-based operating system, designed by Google. As
it is derived from the free Chromium OS, it uses the Google Chrome web browser as its
principal user interface. This OS primarily supports web applications.
WORKSTATIONS
Process Control
A Process Control Block (PCB) is a data structure that contains information about a process. The process control block is also known as a task control block, a process-table entry, etc.
A processor is an integrated electronic circuit that performs the calculations that run a
computer.
A processor performs arithmetical, logical, input/output (I/O) and other basic instructions
that are passed from an operating system (OS).
Most other processes are dependent on the operations of a processor.
The CPU is just one of the processors inside a personal computer (PC).
The Graphics Processing Unit (GPU) is another processor, and even some hard drives are
technically capable of performing some processing.
Processor Registers: A register is a small, fast memory that resides in the processor. It provides quick access to data for the currently executing process. A register can be 8-bit, 16-bit, 32-bit, or 64-bit.
a) PC: PC stands for Program Counter. It contains the address of the next instruction to be executed.
b) IR: IR stands for Instruction Register. It stores the instruction currently being executed.
c) MAR: MAR stands for Memory Address Register. It stores the address of the data or instruction to be fetched from main memory.
d) MBR: MBR stands for Memory Buffer Register. It stores the data or instruction fetched from main memory, which is then copied into the Instruction Register (IR) for execution.
e) I/O AR: I/O AR stands for Input/Output Address Register. It specifies a particular I/O device.
f) I/O BR: I/O BR stands for Input/Output Buffer Register. It is used for exchanging data between an I/O module and the processor.
and the mode bit is set to 1 (user mode). When a system call or interrupt occurs, control returns to kernel mode (mode bit 0) and, after the request is serviced, process execution continues.
System Call
A system call is a way for programs to interact with the operating system.
A computer program makes a system call when it makes a request to the operating
system’s kernel.
A system call provides the services of the operating system to user programs via the
Application Program Interface (API).
It provides an interface between a process and the operating system. All programs needing
resources must use system calls.
Services Provided by System Calls :
1. Process creation and management
2. Main memory management
3. File Access, Directory & File system management
4. Device handling (I/O)
5. Protection
6. Networking, etc.
Types of System Calls: There are 5 different categories of system calls (a short C sketch follows this list) –
1. Process control: end, abort, create, terminate, allocate and free memory.
2. File management: create, open, close, delete, read file etc.
3. Device management
4. Information maintenance
5. Communication
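The following is a minimal C sketch (POSIX assumed; not part of the original text) of a program requesting OS services through system calls: fork() and wait() for process control, getpid() for information maintenance, and printf(), which uses the write() file-management call underneath.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>     /* fork(), getpid() */
#include <sys/wait.h>   /* wait() */

int main(void)
{
    pid_t pid = fork();                      /* process control: create a child */
    if (pid < 0) {
        perror("fork");                      /* error handling */
        exit(1);
    }
    if (pid == 0) {
        printf("child pid=%d\n", getpid());  /* information maintenance */
        _exit(0);                            /* process control: terminate */
    }
    wait(NULL);                              /* process control: wait for child */
    printf("parent pid=%d\n", getpid());
    return 0;
}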
System Programs
System Programming can be defined as the act of building Systems Software using System
Programming Languages.
Process Concepts
Process:
A process is a program in execution. A system consists of a collection of processes.
Each process executes in a sequential fashion. Operating-system processes execute the system code, and user processes execute the user code.
Process Hierarchy
Process States:
The process state is defined as the current activity of the process.
A process goes through various states, during its execution.
The operating system places the processes in a FIFO (First In First Out) queue for
execution.
A dispatcher is a program that switches the processor from one process to another.
The different process states are as follows.
[Figure: process state diagram — New --(admitted)--> Ready --(dispatch)--> Running --(complete)--> Terminated; Running --(timeout)--> Ready; Running --> Waiting --> Ready]
1. New State: The New state means that a process is being admitted (created) by the operating system.
2. Ready State: The Ready state means that the process is ready to execute, i.e., waiting for a chance to run.
3. Running State: The Running state means that the instructions of the process are being executed.
4. Waiting State: The Waiting state means that the process is waiting for some event to occur, such as the completion of an I/O operation. It is also known as the Blocked state.
5. Terminated State: The Terminated state means that the process has finished its execution; it has either completed or been aborted for some reason.
State Transitions of a Process
The process state transitions are the following:
1. Null → New
2. New → Ready
3. Ready → Running
4. Running → Terminated
5. Running → Ready
6. Running → Waiting
7. Waiting → Ready
[Figure: scheduling levels — the Long-term scheduler admits New processes into the Ready (or Ready/Suspend) queue; the Short-term scheduler dispatches Ready processes to Running and then Exit; the Medium-term scheduler moves processes between Ready and Ready/Suspend]
1. Long-Term Scheduler:
A Long-Term Scheduler determines which programs are admitted to the system for
processing.
Once a program is admitted, it becomes a process and is added to the ready queue.
It controls the degree of multiprogramming, i.e., the number of processes present in the
ready state at any time.
The Long-Term Scheduler is also called the Job Scheduler.
2. Short-Term Scheduler:
The Short-Term Scheduler is also known as the CPU Scheduler or Dispatcher.
It decides which process will execute next on the CPU, i.e., which moves from the Ready to the Running state.
It may also preempt the currently running process in order to execute another process.
The main aim of this scheduler is to enhance CPU performance and increase the process-execution rate.
3. Medium-Term Scheduler:
The Medium-Term Scheduler is responsible for suspending and resuming the
processes.
It mainly does Swapping. i.e., moving processes from Main memory to secondary
memory and vice versa.
The Medium-Term Scheduler reduces the degree of Multi-programming.
Scheduling Algorithms
I. Non-Preemptive Algorithms:
A non-preemptive algorithm will not preempt the currently running process: once a process enters the CPU, it cannot be preempted until it completes its execution.
Ex: (1). First Come First Serve (FCFS)
(2). Shortest Job First (SJF)
II. Preemptive Algorithms:
A preemptive algorithm can preempt the currently running process: the currently running process may be interrupted and moved to the Ready state. The preemption decision is made when a new process arrives, when an interrupt occurs, or when a time-out occurs.
Ex: Round Robin (RR)
1) First Come First Serve [FCFS] Algorithm:
The FCFS algorithm is the simplest and most straightforward scheduling algorithm.
It follows the non-preemptive scheduling method.
In this algorithm, processes are executed on a first-come, first-served basis.
This algorithm is easy to understand and implement.
The problem with this algorithm is that the average waiting time is often quite long.
Example: Consider the following processes that arrive at time 0.
Process    Burst Time (Milliseconds)
P1         24
P2         3
P3         3
If the processes arrive in the order P1, P2, P3, then the Gantt chart of this schedule is
as follows.
P1 | P2 | P3
0   24   27   30
The waiting times are 0 ms for P1, 24 ms for P2, and 27 ms for P3, so the average waiting time is (0 + 24 + 27) / 3 = 17 ms. A short C sketch of this calculation follows.
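This illustrative C program (using the burst times above; not part of the original text) computes each process's waiting time and the average under FCFS:

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};          /* burst times of P1, P2, P3 */
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total += wait;
        wait += burst[i];              /* the next process waits for this one */
    }
    printf("average waiting time = %.2f ms\n", (double)total / n);
    return 0;
}

It prints waiting times 0, 24, and 27 ms and an average of 17.00 ms, matching the Gantt chart.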
2) Shortest Job First [SJF] Algorithm:
The SJF algorithm assigns the CPU to the process that has the smallest next CPU burst; it is used here in its non-preemptive form.
Example: Consider four processes P1, P2, P3, and P4 that arrive at time 0 with burst times of 6, 8, 7, and 3 milliseconds respectively. The Gantt chart of this schedule is as follows.
P4 | P1 | P3 | P2
0    3    9    16   24
3) Round Robin [RR] Algorithm:
The Round Robin scheduling algorithm is used in time-sharing systems.
It is one of the most widely used algorithms.
A fixed time slice (the quantum) is allotted to each process for execution.
If the running process does not complete within its quantum, the process is
preempted.
The next process in the ready queue is then allocated the CPU for execution.
The drawback of this algorithm is that the average waiting time is often long.
Example: Consider the following processes, which arrive at time 0, with a time quantum of 4 milliseconds.
Process    Burst Time (Milliseconds)
P1         24
P2         3
P3         3
The Gantt chart of this schedule is as follows.
P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1
0    4    7    10   14   18   22   26   30
P1 waits 6 ms (10 − 4), P2 waits 4 ms, and P3 waits 7 ms, so the average waiting time is 17/3 ≈ 5.66 ms. An illustrative simulation follows.
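Below is an illustrative C sketch (not part of the original text) that replays this Round Robin schedule with a quantum of 4 ms; it assumes all processes arrive at time 0 and no new arrivals occur.

#include <stdio.h>

int main(void)
{
    int remaining[] = {24, 3, 3};      /* remaining burst of P1, P2, P3 */
    int n = 3, quantum = 4, t = 0, done = 0;

    while (done < n) {                 /* cycle through the ready processes */
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;              /* this process has finished */
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%2d..%2d  P%d runs\n", t, t + run, i + 1);
            t += run;
            remaining[i] -= run;
            if (remaining[i] == 0)
                done++;
        }
    }
    return 0;
}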
Threads
A thread is also called a “lightweight process”; it is a single unit of execution within a process.
A thread has its own program counter (PC), register set, and stack.
It shares some information with the other threads of its process, such as the process code, data, and open files.
A traditional process has a single thread of control; it is also called a “heavyweight
process”.
If a process contains multiple threads of control, it can do more than one task at a
time.
Many software packages that run on modern computers are multithreaded.
For example, the MS-Word software uses multiple threads for tasks such as performing spelling and
grammar checking in the background, auto-saving, etc.
Threading Issues
The main threading issues are:
a) The fork() and exec() system calls
b) Signal handling
c) Thread cancellation
d) Thread pools
e) Thread-local storage
a. The fork() and exec() system calls
The fork() system call is used to create a duplicate process.
The meanings of the fork() and exec() system calls change in a multithreaded program.
If a thread calls fork(), does the new process duplicate all threads, or is the new process single-threaded? (Some UNIX systems resolve this by providing two versions of fork().)
If a thread calls the exec() system call, the program specified in the parameter to exec()
will replace the entire process, including all its threads (see the sketch below).
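A minimal POSIX C sketch of this behavior (the program run, /bin/ls, is just an example):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                /* duplicate the calling process */
    if (pid == 0) {
        /* child: exec() replaces the whole process image, all threads included */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");               /* reached only if exec fails */
        _exit(1);
    }
    wait(NULL);                        /* parent waits for the child */
    printf("child finished\n");
    return 0;
}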
b. Signal Handling
Generally, signal is used in UNIX systems to notify a process that a particular event
has occurred.
A signal is received either synchronously or asynchronously, depending on the source of
and the reason for the event being signaled.
All signals, whether synchronous or asynchronous, follow the same pattern as given
below
A signal is generated by the occurrence of a particular event.
The signal is delivered to a process.
Once delivered, the signal must be handled.
c. Cancellation
Terminating a thread in the middle of its execution is called thread cancellation.
Threads that are no longer required can be cancelled by another thread using one of two
techniques:
1. Asynchronous cancellation
2. Deferred cancellation
1. Asynchronous Cancellation
The target thread is cancelled immediately.
2. Deferred Cancellation
A flag is set indicating that the thread should cancel itself when it is
feasible to do so.
For example, if multiple database threads are concurrently searching through a
database and one thread returns the result, the remaining threads might be cancelled.
d. Thread Pools
In a multithreaded web server, whenever the server receives a request, it creates a
separate thread to service that request.
A thread pool instead creates a number of threads at process start-up and places them into a
pool, where they sit and wait for work.
e. Thread-Local Storage
The benefit of using threads in the first place is that most data is shared among the
threads; sometimes, however, threads also need their own thread-specific data.
The major thread libraries (Pthreads, Win32, and Java) provide support for such
thread-specific data, which is called thread-local storage (TLS).
Thread Libraries
Thread libraries provide programmers with an Application Program Interface (API) for
creating and managing threads.
Thread libraries may be implemented either in user space or in kernel space.
There are two primary ways of implementing a thread library:
The first way is to provide a library entirely in user space with no kernel support.
The second way is to implement a kernel-level library supported directly by the
operating system.
There are Three Main Thread Libraries in use today:
1. POSIX Pthreads - may be provided as either a user- or kernel-level library, as an extension to
the POSIX standard.
Pthreads is available on Solaris, Linux, Mac OS X, Tru64, and via public-domain
shareware for Windows.
Global variables are shared amongst all threads.
One thread can wait for the others to rejoin before continuing (see the sketch below).
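A minimal Pthreads sketch (an assumed example; compile with gcc file.c -pthread) showing a shared global and a join:

#include <stdio.h>
#include <pthread.h>

int shared = 0;                        /* global: shared by all threads */

void *worker(void *arg)
{
    shared += 10;                      /* runs in the new thread */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);   /* create the thread */
    pthread_join(tid, NULL);           /* wait for it to rejoin */
    printf("shared = %d\n", shared);   /* prints 10 */
    return 0;
}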
2. Win32 threads - provided as a kernel-level library on Windows systems.
It is similar to Pthreads.
3. Java threads –
Since Java generally runs on a Java Virtual Machine (JVM), the implementation of
threads depends on whatever OS and hardware the JVM is running on, i.e., either
Pthreads or Win32 threads, depending on the system.
Deadlock
Deadlock: “A deadlock is a situation in which a set of processes is blocked, because each
process is holding a resource and waiting for another resource acquired by some other
process.” (or) A deadlock is a situation in which several processes compete for a
finite number of resources.
In a multiprogramming system, a process requests a resource, and if the resource is
not available the process enters a waiting state. The waiting process may never change
state again, because the resources it needs are held by other waiting processes. This situation is called a
deadlock.
Consider the following resource-allocation graph.
[Figure: resource-allocation graph — R1 is assigned to P1; P1 is waiting for R2; R2 is assigned to P2; P2 is waiting for R1]
In this resource-allocation graph, process P1 is holding resource R1 and waiting for resource R2, which is assigned to process P2, while process P2 is waiting for resource R1. This situation is called a deadlock.
4) Circular wait: The processes are waiting for resources in a circle. For example, P1 is
holding resource R1 and waiting for resource R2, while P2 is holding resource R2 and waiting for resource R1.
[Figure: circular wait — P1 holds R1 and requests R2; P2 holds R2 and requests R1]
[Figure: “No deadlock” — a resource-allocation graph in which the resource types have multiple instances, so the cycle does not produce a deadlock]
In the above diagrams, P1 and P2 represent processes, R1 and R2 represent resources, and each dot represents one instance of a resource.
To prevent the circular-wait condition, resources can be numbered and must be requested in increasing order. For example, if process P1 has been allocated resource R5, a later request by P1 for R4 or R3 (which are ordered lower than R5) will not be granted; only requests for resources ordered higher than R5 will be granted.
[Figure: the space of system states — deadlock states are contained within the unsafe states; safe states lie outside the unsafe region]
Resource allocation:
Consider a system with a finite number of processes and a finite number of resources. At
any time, a process may have zero or more resources allocated to it. The state of the system is
reflected by the current allocation of resources to processes. The state may be a safe state or
an unsafe state.
Safe State: A state is safe if the system can allocate resources to each process (up to its maximum) in some order and still avoid deadlock; that is, a safe sequence of processes exists.
Unsafe State: A state that is not safe. An unsafe state may lead to a deadlock, although not every unsafe state actually results in one.
A detection algorithm examines the state of the system to determine whether a
deadlock has occurred.
A recovery algorithm is then used to recover from the deadlock.
1. Deadlock Detection: Deadlock detection is the process of determining whether a deadlock
exists and identifying the processes and resources involved in it. The basic idea
is to examine the resource-allocation state to determine whether the system is
deadlocked.
Detection strategies do not restrict process actions; with deadlock detection, requested
resources are granted to processes whenever possible. Periodically, the OS runs an
algorithm to detect the circular-wait condition.
1. A deadlock exists if and only if there are unmarked processes at the end of the
algorithm.
2. Each unmarked process is deadlocked.
3. The strategy in this algorithm is to repeatedly find a process whose request can be satisfied
with the available resources, mark it, and release its resources (a C sketch of this marking algorithm follows the list).
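The following C sketch illustrates the marking algorithm on the two-process example above (P1 holds R1 and requests R2, P2 holds R2 and requests R1); the matrices are illustrative values, not part of the original text.

#include <stdio.h>
#include <stdbool.h>

#define N 3   /* processes */
#define M 2   /* resource types */

int alloc[N][M]   = {{1, 0}, {0, 1}, {0, 0}};  /* current allocation */
int request[N][M] = {{0, 1}, {1, 0}, {0, 0}};  /* outstanding requests */
int avail[M]      = {0, 0};                    /* available instances */

int main(void)
{
    bool marked[N] = {false};
    bool progress = true;

    while (progress) {                 /* repeat until no process can be marked */
        progress = false;
        for (int i = 0; i < N; i++) {
            if (marked[i])
                continue;
            bool can_finish = true;    /* can its request be satisfied? */
            for (int j = 0; j < M; j++)
                if (request[i][j] > avail[j]) { can_finish = false; break; }
            if (can_finish) {          /* mark it and release its resources */
                marked[i] = true;
                for (int j = 0; j < M; j++)
                    avail[j] += alloc[i][j];
                progress = true;
            }
        }
    }
    for (int i = 0; i < N; i++)        /* unmarked processes are deadlocked */
        if (!marked[i])
            printf("P%d is deadlocked\n", i + 1);
    return 0;
}

Running this reports P1 and P2 as deadlocked, as expected from the circular wait.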
2. Deadlock Recovery: When a detection algorithm finds that a deadlock exists, several
recovery methods can be used.
a) Process Termination: To eliminate deadlocks by aborting a process, we use one of two
methods. In both methods, the system reclaims all resources allocated to the
terminated processes.
1. Abort all deadlocked processes: This method clearly breaks the deadlock
cycle, but at great expense; these processes may have computed for a long time, and the
results of their partial computations must be discarded and recomputed later.
2. Abort one process at a time, until the deadlock cycle is eliminated: This
method involves considerable overhead, since after each process is aborted a
deadlock-detection algorithm must be run to determine whether any processes are still
deadlocked.
b) Resource Preemption: Resources are preempted from the processes involved in the
deadlock, and the preempted resources are allocated to other processes, so that there
is a possibility of recovering the system from the deadlock.
Concurrency Condition
Concurrency means that an application is making progress on more than one task at the
same time (concurrently). If the computer has only one CPU, the application may
not make progress on more than one task at exactly the same instant, but more
than one task is being processed at a time inside the application; it does not completely
finish one task before it begins the next.
There are several kinds of concurrency. In a single-processor operating system, there
is little point to concurrency except to support multiple users, or to support threads that
are likely to become blocked waiting on I/O, where we do not want to waste CPU cycles. In a
multi-processor or multi-core system, concurrency can greatly increase throughput.
Process Synchronization
Process synchronization means coordinating the way processes share system resources so
that concurrent access to shared data is handled safely, minimizing the chance of
inconsistent data. Maintaining data consistency demands mechanisms to ensure the
synchronized execution of cooperating processes.
Example: the general structure of a process with a critical section is as follows.

while (1)
{
    entry section;
    critical section;
    exit section;
    remainder section;
}

A solution to the critical-section problem must satisfy the following three conditions:
1. Mutual Exclusion: only one process may execute in its critical section at a time.
2. Progress: if no process is executing in its critical section, the selection of the next process to enter cannot be postponed indefinitely.
3. Bounded Waiting: there is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted.
Semaphores
A semaphore is a synchronization tool defined by Dijkstra in 1965 for managing
concurrent processes by using the value of a simple variable.
A semaphore is simply a variable. This variable is used to solve the critical-section problem
and to achieve process synchronization in a multiprocessing environment.
A semaphore ‘S‘ is an integer variable which is accessed through two standard
operations, wait and signal. These operations were originally termed ‘P‘ (for
wait, meaning to test) and ‘V‘ (for signal, meaning to increment). The classical
definition of wait is:
Wait(S)
{
    while (S <= 0)
        ;           /* no operation: keep testing (busy wait) */
    S--;
}
The classical definition of signal is:

Signal(S)
{
    S++;
}
In wait, the testing of the semaphore value and its decrement must be executed without
interruption, i.e., as one atomic action.
Wait: The wait operation decrements the value of its argument S when it is positive. If S is
zero or negative, no decrement is performed and the process keeps testing.
Types of Semaphores:
Binary Semaphore:
A binary semaphore is a semaphore whose integer value can range only
between 0 and 1. Let ‘S‘ be a counting semaphore. To implement it using binary
semaphores, we need the following data structures:

binary semaphore S1, S2;
int C;

Initially S1 = 1, S2 = 0, and the value of C is set to the initial value of the counting
semaphore ‘S‘. The wait operation of the counting semaphore can then be
implemented as follows.
Wait(S1);
C--;
if (C < 0)
{
    Signal(S1);
    Wait(S2);
}
Signal(S1);
The signal operation of the counting semaphore can be implemented
as follows:

Wait(S1);
C++;
if (C <= 0)
    Signal(S2);
else
    Signal(S1);
Bounded-Buffer (Producer-Consumer) Problem: It is assumed that the pool consists of ‘N‘ buffers, each capable of holding one item.
The ‘mutex‘ semaphore provides mutual exclusion for access to the buffer pool and
is initialized to the value one. The ‘empty‘ and ‘full‘ semaphores count the number of
empty and full buffers respectively. The semaphore empty is initialized to ‘N‘ and the
semaphore full is initialized to zero. This problem is known as the producer-consumer
problem: the code of the producer produces full buffers, and the
code of the consumer produces empty buffers. The structure of the producer process is
as follows:
do {
    /* produce an item in nextp */
    Wait(empty);
    Wait(mutex);
    /* add nextp to the buffer */
    Signal(mutex);
    Signal(full);
} while (1);
The structure of the consumer process is as follows (a runnable POSIX version follows the pseudocode):

do {
    Wait(full);
    Wait(mutex);
    /* remove an item from the buffer into nextc */
    Signal(mutex);
    Signal(empty);
    /* consume the item in nextc */
} while (1);
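The structure above can be made runnable with POSIX semaphores and threads; the following is a sketch under that assumption (compile with gcc file.c -pthread), using a buffer of N = 5 integer slots:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define NBUF 5
int buffer[NBUF];
int in = 0, out = 0;                   /* next free slot / next full slot */
sem_t empty, full, mutex;

void *producer(void *arg)
{
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);              /* Wait(empty)  */
        sem_wait(&mutex);              /* Wait(mutex)  */
        buffer[in] = item;             /* add the item to the buffer */
        in = (in + 1) % NBUF;
        sem_post(&mutex);              /* Signal(mutex) */
        sem_post(&full);               /* Signal(full)  */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);               /* Wait(full)   */
        sem_wait(&mutex);              /* Wait(mutex)  */
        int item = buffer[out];        /* remove an item from the buffer */
        out = (out + 1) % NBUF;
        sem_post(&mutex);              /* Signal(mutex) */
        sem_post(&empty);              /* Signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, NBUF);         /* N empty slots initially */
    sem_init(&full, 0, 0);             /* no full slots initially */
    sem_init(&mutex, 0, 1);            /* mutual exclusion */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}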
Reader-Writer Problem: In this problem two types of processes are used: reader
processes and writer processes. The reader process is responsible only for reading,
and the writer process is responsible only for writing.
This is an important synchronization problem which has several variations, such as:
o The simplest is referred to as the first readers-writers problem, which
requires that no reader be kept waiting unless a writer has already
obtained permission to use the shared object. In other words, no
reader should wait for other readers to finish simply because a writer is
waiting.
o The second readers-writers problem requires that once a writer is
ready, the writer performs its write operation as soon as
possible.
The structure of a reader process is as follows:

Wait(mutex);
readcount++;
if (readcount == 1)
    Wait(wrt);         /* the first reader locks out writers */
Signal(mutex);
/* reading is performed */
Wait(mutex);
readcount--;
if (readcount == 0)
    Signal(wrt);       /* the last reader lets writers in */
Signal(mutex);
The structure of the writer process is as follows:

Wait(wrt);
/* writing is performed */
Signal(wrt);
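POSIX threads provide read-write locks that implement this readers-writers discipline at the library level; a short illustrative sketch (compile with gcc file.c -pthread):

#include <stdio.h>
#include <pthread.h>

pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
int shared_data = 0;

void *reader(void *arg)
{
    pthread_rwlock_rdlock(&rw);        /* many readers may hold this at once */
    printf("read %d\n", shared_data);
    pthread_rwlock_unlock(&rw);
    return NULL;
}

void *writer(void *arg)
{
    pthread_rwlock_wrlock(&rw);        /* a writer gets exclusive access */
    shared_data++;
    pthread_rwlock_unlock(&rw);
    return NULL;
}

int main(void)
{
    pthread_t r, w;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}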
UNIT - 4: Memory Management & Virtual Memory
1. Symbolic addresses: the addresses used in source code. Variable names,
constants, and instruction labels are the basic elements of the symbolic address space.
2. Relative addresses: at the time of compilation, a compiler converts symbolic addresses
into relative addresses.
3. Physical addresses: the loader generates these addresses at the time when a program is
loaded into main memory.
The set of all logical addresses generated by a program is referred to as a logical
address space. The set of all physical addresses corresponding to these logical addresses is
referred to as a physical address space.
Memory Allocation
One of the simplest methods for allocating memory is to divide memory into several fixed-
sized partitions.
Each partition may contain exactly one process. Thus, the degree of multiprogramming is
bound by the number of partitions.
In this multiple partition method, when a partition is free, a process is selected from the
input queue and is loaded into the free partition.
When the process terminates, the partition becomes available for another process.
In the Variable partition method, the operating system keeps a table, indicating which
parts of memory are available and which are occupied.
Initially, all memory is available for user processes and is considered one large block of
available memory, a hole.
The first fit, best fit and worst fit strategies are the most commonly used schemes to select
a free hole from the set of available holes.
First fit: Allocate the first hole that is big enough. Searching can start either at the beginning
of the set of holes or at the location where the previous first-fit search ended. We can stop
searching as soon as we find a free hole that is large enough.
Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless
the list is ordered by size. This strategy produces the smallest leftover hole.
Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by
size. This strategy produces the largest leftover hole, which may be more useful than the smaller
leftover hole from a best-fit approach.
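As an illustration, the following C sketch applies the first-fit strategy to a made-up list of free holes:

#include <stdio.h>

int main(void)
{
    int holes[] = {100, 500, 200, 300, 600};   /* free hole sizes in KB */
    int n = 5, request = 212;                  /* hypothetical request */

    for (int i = 0; i < n; i++) {
        if (holes[i] >= request) {             /* first hole big enough */
            printf("allocate %d KB from hole %d (%d KB), leaving %d KB\n",
                   request, i, holes[i], holes[i] - request);
            holes[i] -= request;
            return 0;
        }
    }
    printf("no hole large enough for %d KB\n", request);
    return 0;
}

On this list, first fit picks the 500 KB hole; best fit would instead scan the whole list and pick the 300 KB hole (the smallest that fits), while worst fit would pick the 600 KB hole.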
2. Variable Partitioning:
Variable partitioning is a contiguous memory-management technique in which main
memory is not divided into fixed partitions; instead, a partition of exactly the required size
is created for each process.
The space which is left over is considered free space, which can be used by other
processes.
It also provides the concept of compaction: in compaction, the scattered free spaces
(the spaces not allocated to any process) are combined into one single large block of memory.
Paging
A computer can address more memory than the amount physically installed on
the system. This extra memory is called virtual memory, and it is a section of
a hard disk that is set up to emulate the computer's RAM. The paging technique plays an
important role in implementing virtual memory.
Address Translation
A page address is called a logical address and is represented by a page number and an
offset.
A frame address is called a physical address and is represented by a frame number and
an offset.
A data structure called the page map table is used to keep track of the relation between the
pages of a process and the frames in physical memory.
When the system allocates a frame to a page, it translates the logical address into a
physical address and creates an entry in the page table, to be used throughout the execution of the
program (a small sketch of the translation follows).
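A small C sketch of this translation, assuming a hypothetical page size of 1 KB and a made-up page table:

#include <stdio.h>

#define PAGE_SIZE 1024

int page_table[] = {5, 2, 7, 0};       /* page number -> frame number */

int main(void)
{
    int logical = 2 * PAGE_SIZE + 100;         /* page 2, offset 100 */
    int page    = logical / PAGE_SIZE;         /* page number */
    int offset  = logical % PAGE_SIZE;         /* offset within the page */
    int frame   = page_table[page];            /* page-table lookup */
    int physical = frame * PAGE_SIZE + offset;

    printf("logical %d -> page %d, offset %d -> physical %d\n",
           logical, page, offset, physical);
    return 0;
}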
Advantages and Disadvantages of Paging
Paging reduces external fragmentation but still suffers from internal fragmentation.
Paging is simple to implement and is regarded as an efficient memory-management
technique.
Due to the equal size of pages and frames, swapping becomes very easy.
The page table requires extra memory space, so paging may not be good for a system with a small
RAM.
Segmentation
Segmentation is another way of dividing the addressable memory. It is another
scheme of memory management and it generally supports the user view of memory.
The Logical address space is basically the collection of segments. Each segment has a
name and a length.
Basically, a process is divided into segments. Like paging, segmentation divides
or segments the memory. But there is a difference: while paging
divides the memory into fixed-size units, segmentation divides the
memory into variable-size segments, which are then loaded into the logical memory space.
A Program is basically a collection of segments. And a segment is a logical unit
such as:
Main program
Procedure
Function
Method
Object
Local variable and global variables.
Symbol table
Common block
Stack
Arrays
Types of Segmentation
Given below are the types of Segmentation:
Virtual Memory Segmentation: with this type of segmentation, each process is
segmented into n divisions and, most importantly, they are not all
segmented (loaded) at once; segments may be brought in as needed during run time.
Simple Segmentation: with this type, each process is segmented
into n divisions that are all loaded at once at load time, yet they
can be non-contiguous (that is, they may be scattered in
memory).
Characteristics of Segmentation
Some characteristics of the segmentation technique are as follows:
The Segmentation partitioning scheme is variable-size.
Partitions of the secondary memory are commonly known as segments.
Partition size mainly depends upon the length of modules.
Thus with the help of this technique, secondary memory and main memory are
divided into unequal-sized partitions.
Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.
Disadvantages
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. It requires costly memory-management algorithms.
2. Two-Level Directory: A two-level directory contains a separate directory for each user.
Each directory can contain subdirectories. Different users may therefore create files with the
same name, since each user's files are kept in that user's own directory.
[Figure: a two-level directory tree — the root directory (\) contains per-user directories such as Dir C, which in turn contain files such as File1, File2, and File3]
FILE OPERATIONS IN OS
A file is a collection of related information. The files are stored in secondary storage
devices. In general, a file is a sequence of bits, bytes, lines, or records.
The information in a file is defined by its creator. Different types of information may be
stored in a file. The information may be source programs, object programs, executable
programs, numeric data, text, images, sound recordings, video information, and so on.
The file rename operation is used to change the name of an existing file.
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk.
For example, if a file requires n blocks and is given block b as the starting location, then the blocks assigned to the file will be b, b+1, b+2, ..., b+n-1. This means that given the starting block address and the length of the file (in terms of blocks required), we can determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains:
Address of the starting block
Length of the allocated portion.
The file ‘mail’ in the following figure starts from block 19 with length = 6 blocks; therefore, it occupies blocks 19, 20, 21, 22, 23, and 24.
2. Linked List Allocation
In this scheme, each file is a linked list of disk blocks, which need not be contiguous; the disk blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block contains a pointer to the next block occupied by the file.
The file ‘jeep’ in the following figure shows how the blocks can be randomly distributed. The last block (25) contains -1, indicating a null pointer that does not point to any other block.
3. Indexed Allocation
In this scheme, a special block known as the index block contains the pointers to all the blocks occupied by a file.
Each file has its own index block; the i-th entry in the index block contains the disk address of the i-th file block. The directory entry contains the address of the index block, as shown in the figure (a small sketch follows).
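An illustrative C sketch of an index-block lookup (the disk-block numbers are made up):

#include <stdio.h>

int main(void)
{
    int index_block[] = {9, 16, 1, 10, 25};    /* i-th file block -> disk block */
    int nblocks = 5;

    /* To read the i-th block of the file, fetch entry i of the index block. */
    for (int i = 0; i < nblocks; i++)
        printf("file block %d is stored in disk block %d\n", i, index_block[i]);
    return 0;
}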
Device Management in Operating System
Device management means controlling the Input/Output devices like disk, microphone,
keyboard, printer, magnetic tape, USB ports, etc.
A process may require various resources, including main memory, file access, access
to disk drives, and others.
If resources are available, they can be allocated, and control returned to the CPU.
Otherwise, the request has to be postponed until adequate resources become
available.
The system has multiple devices; in order to handle these physical or virtual devices,
the operating system requires a separate program known as a device controller. It also
determines whether the requested device is available.
The fundamentals of I/O devices may be divided into three categories:
1. Block Device
2. Character Device
3. Network Device
1. Block Device
It stores data in fixed-size blocks, each with its own unique address. For example, disks.
2. Character Device
It transmits or accepts a stream of characters, none of which can be addressed individually.
For instance, keyboards, printers, etc.
3. Network Device
It is used for transmitting the data packets.
File Protection
File protection means keeping information in a computer system safe from physical damage
and improper access. Physical damage to files on disk can occur due to hardware
problems. Improper access is due to misuse of files by unauthorized users.
1. Protection from Physical damage:
Protection from physical damage is generally provided by maintaining duplicate copies
of files.
Many computer systems have programs that automatically copy files to tape at regular intervals
(once per day, week, or month) to maintain a backup copy.
The administrator or the user must maintain this procedure to protect important
information.
Reasons for physical damage: File systems can be damaged by various reasons. Some of
them are:
Continuous use of hardware: Disks can be damaged by the continuous
reading and writing of files.
Power failures: Frequent Power (electrical) problems can damage the system
physically.
Sabotage: Sabotage means intentional damage.
Step 1 − Create two pipes. The first is for the parent to write and the child to read, say pipe1.
The second is for the child to write and the parent to read, say pipe2.
Step 2 − Create a child process. (A sketch of this setup follows.)
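A POSIX C sketch of these two steps (the messages "ping"/"pong" are illustrative):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int pipe1[2], pipe2[2];            /* [0] = read end, [1] = write end */
    char buf[16];

    pipe(pipe1);                       /* Step 1: parent writes, child reads */
    pipe(pipe2);                       /*         child writes, parent reads */

    if (fork() == 0) {                 /* Step 2: create the child process */
        read(pipe1[0], buf, sizeof(buf));      /* child reads from pipe1  */
        printf("child got: %s\n", buf);
        write(pipe2[1], "pong", 5);            /* child replies on pipe2  */
        _exit(0);
    }
    write(pipe1[1], "ping", 5);                /* parent writes to pipe1  */
    read(pipe2[0], buf, sizeof(buf));          /* parent reads from pipe2 */
    printf("parent got: %s\n", buf);
    wait(NULL);
    return 0;
}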
Buffer
The buffer is an area in main memory used to store or hold data temporarily.
In other words, the buffer temporarily stores data transmitted from one place to another,
either between two devices or between a device and an application.
The act of storing data temporarily in the buffer is called buffering.
Types of Buffering:-
There are three main types of buffering in the operating system, such as:
1. Single Buffer
In single buffering, only one buffer is used to transfer data between two devices.
The producer produces one block of data into the buffer, after which the consumer
consumes it. Only when the buffer is empty does the producer again produce data.
2. Double Buffer
In double buffering, two buffers are used in place of one.
In this buffering, the producer fills one buffer while the consumer simultaneously consumes
the other, so the producer does not need to wait for the buffer to be emptied.
Double buffering is also known as buffer swapping.
3. Circular Buffer
When more than two buffers are used, the collection of buffers is called a circular
buffer.
Each buffer is one unit in the circular buffer. The data-transfer rate increases when
using a circular buffer rather than double buffering (see the sketch below).
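A small single-threaded C sketch of a circular buffer (the capacity and values are made up); the head and tail indices wrap around using the modulo operator:

#include <stdio.h>
#include <stdbool.h>

#define CAP 4
int buf[CAP];
int head = 0, tail = 0, count = 0;

bool put(int x)                        /* producer side */
{
    if (count == CAP)
        return false;                  /* buffer is full */
    buf[tail] = x;
    tail = (tail + 1) % CAP;           /* wrap around */
    count++;
    return true;
}

bool get(int *x)                       /* consumer side */
{
    if (count == 0)
        return false;                  /* buffer is empty */
    *x = buf[head];
    head = (head + 1) % CAP;           /* wrap around */
    count--;
    return true;
}

int main(void)
{
    for (int i = 1; i <= 5; i++)       /* the fifth put finds the buffer full */
        printf("put %d: %s\n", i, put(i) ? "ok" : "full");
    int v;
    while (get(&v))
        printf("got %d\n", v);
    return 0;
}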
How Buffering Works
In an operating system, buffering works by holding data temporarily in the buffer while it is in transit between a device and an application (or between two devices).
Shared Memory
Shared memory is memory shared between two or more processes.
Each process has its own address space; if any process wants to communicate
information from its own address space to other processes, it is possible only with IPC
(inter-process communication) techniques.
Shared memory is the fastest inter-process communication mechanism.
The operating system maps a memory segment into the address space of several processes,
so that they can read and write in that memory segment without calling operating-system
functions.
To use shared memory, we have to perform two basic steps (a sketch follows the list):
1. Request from the operating system a memory segment that can be shared between
processes.
2. Associate a part of that memory, or the whole segment, with the address space of the
calling process.
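A POSIX C sketch of these two steps using shm_open() and mmap(); the segment name "/demo_shm" and its size are made up (compile with gcc file.c -lrt on some systems):

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    /* Step 1: request a shareable memory segment from the OS */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);                       /* set the segment size */

    /* Step 2: associate the segment with this process's address space */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(p, "hello");                /* any cooperating process that maps */
    printf("%s\n", p);                 /* "/demo_shm" sees these same bytes */

    munmap(p, 4096);                   /* detach and remove the segment */
    shm_unlink("/demo_shm");
    close(fd);
    return 0;
}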
System security may be threatened through two violations, and these are as follows:
1. Threat
2. Attack
There are several goals of system security. Some of them are as follows:
1. Integrity
2. Secrecy
The system's objects must only be available to a small number of authorized users.
The system files should not be accessible to everyone.
3. Availability
All system resources must be accessible to all authorized users, i.e., no single
user/process should be able to consume all system resources
Access Authorization
In the authorization process, a person's or user's authority to access the
resources is checked.
Authorization is done after the authentication process.
Authorization is the process of giving permission to do or have something.
The system administrator defines which users are allowed to access which file
directories, their hours of access, the amount of allocated storage space, and so on.