Directorate of Distance Education
Kurukshetra University
Kurukshetra-136119
PGDCA/MSc. (cs)-1/MCA-1
STRUCTURE
1. Introduction
2. Objective
3. Presentation of Contents
3.4.2 Multiprogramming Operating System
3.4.5 Multithreading
4. Summary
5. Suggested Readings / Reference Material
1. INTRODUCTION
The Operating System (OS) is system software that acts as an interface between the user of a computer and the computer hardware. An Operating System is a collection of software, consisting of procedures for operating the computer, whose main purpose is to provide an environment in which the hardware and the user's programs can work together smoothly and efficiently. An operating system is also referred to as a resource manager.
The primary need for the Operating System arises from the fact that the user needs to be provided with services, and the Operating System ought to facilitate the provisioning of these services. The central part of a computer system is a processing engine called the CPU. A computer system should make it possible for a user's application to use this processing unit. A user application would also need to store information, and the Operating System makes memory available to an application when required. Similarly, user applications need some way of accepting input so as to communicate with the application; this is often in the form of a keyboard or a mouse, and these input devices are managed by the Operating System. In the same manner, applications may require output that is generated on a monitor or printer; the output may even be a video or audio file. These output devices, too, are managed by the Operating System.
All the applications being used in a computer may require resources for processing, memory, input/output, and storage of information. These service facilities are provided by an operating system regardless of the nature of the application. The Operating System offers generic services to support all of the above operations, and these operations in turn facilitate the applications mentioned earlier. To that extent, the Operating System acts as a common service provider for all applications.
In order to understand operating systems, we must understand the computer hardware and the development of the Operating System from the beginning. Hardware means the physical machine and its electronic components, including memory chips, input/output devices, storage devices and the central processing unit. Software comprises the programs written for these computer systems. Main memory is where the data and instructions are stored to be
computer systems. Main memory is where the data and instructions are stored to be
processed. Input / Output devices are the peripherals attached to the system, such as
keyboard, printers, disk drives, CD drives, magnetic tape drives, modem, monitor, etc.
The central processing unit is the brain of the computer system; it has circuitry to control
the interpretation and execution of instructions. It controls the operation of the entire computer system. All of the storage references, data manipulations and I/O operations are performed by the CPU. There are four components of a computer system, i.e. the hardware, the operating system, the application programs and the users. The hardware provides the basic computing power. The application programs define the way in which these resources are used to solve the computing problems of the users.
There may be many different users trying to solve different problems. The Operating
System controls and coordinates the use of the hardware among the various users and the
application programs.
Figure 1.1: Abstract view of a computer system; application programs run on top of the Operating System, which in turn runs on the computer hardware.
We can view an Operating System as a resource allocator. A computer system has many
resources, which are to be required to solve a computing problem. These resources are
the CPU time, memory space, files storage space, input/output devices and so on. The
Operating System acts as a manager of all of these resources and allocates them to the
specific programs and users as needed by their tasks. Since there can be many conflicting requests for the resources, the Operating System must decide which requests are to be granted resources so that the computer system can operate efficiently and fairly.
An Operating System can also be viewed as a control program, used to control the
various I/O devices and the user programs. A control program controls the execution of
the user programs to prevent errors and improper use of the computer resources. It is
especially concerned with the operation and control of I/O devices. As stated above, the fundamental goal of a computer system is to execute user programs and solve user problems, and computer hardware is constructed to achieve this goal. But the bare hardware is not easy to use, and for this purpose application/system programs are developed. These programs require certain common operations, such as controlling the input/output devices and the use of CPU time for execution. The common functions of controlling and allocating resources among different users and application programs are brought together into one piece of software called the operating system. It is easy to define
operating systems by what they do rather than what they are. The primary goal of an operating system is to make the computer easy to use; operating systems make it easier to compute. A secondary goal is the efficient operation of the computer system. Large computer systems are very expensive, and so it is desirable to make them as efficient as possible. Operating systems thus make optimal use of computer resources.
2. OBJECTIVE
In the present lesson, the functions of the operating system as a resource manager, such as memory management, processor management, device management and information management, are explained. In the later part of the lesson, various types of Operating Systems, such as batch, multiprogramming, time-sharing, real-time and distributed operating systems, are discussed.
3. PRESENTATION OF CONTENTS
The operating system is a manager of resources. We shall be discussing these resources and the way in which the operating system manages them. Viewing the operating system as a resource manager is just one of three standard views of an operating system; the other views are the hierarchical view and the extended machine view. In the resource manager view, the operating system is a collection of programs that manage the system resources, such as processors, memory, input/output devices, and information (programs and data). These resources are valuable, and it is the function of the operating system to see that they are used efficiently and to resolve conflicts arising from competition among the various users. The operating system must keep track of the status of each resource; decide which process is to get the resource (how much, where, and when); allocate it; and eventually reclaim it. We classify the resources as:
Memory
Processors
Input/Output Devices
Information.
Viewing the OS as a resource manager, each resource manager must do the following: keep track of the resource, decide which process gets it (and how much, where, and when), allocate it, and reclaim it when it is released.
We have classified all operating system programs into classes in direct correspondence with the various classes of resources. Below, the major functions of each class of programs and the typical name of the program/module in which they reside are described.
To execute a program, it must be mapped to absolute addresses and loaded into memory.
As the program executes, it accesses instructions and data from memory by generating
these absolute addresses. The Operating System is responsible for the following memory
management functions:
Keep track of every physical memory location within the system: which parts are in use and by whom, and which parts are not in use (called free).
Decide which process gets memory, when it gets it, and how much. Allocate the memory when the process requests it and the allocation policy permits it.
Reclaim the memory when the process no longer needs it or has been terminated (a minimal sketch of this track/allocate/reclaim cycle follows the list).
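As an illustration of the keep-track/allocate/reclaim cycle described above, the following minimal C sketch manages a toy physical memory of fixed-size frames. The frame count, the bitmap representation and names such as alloc_frame are illustrative assumptions, not the mechanism of any particular operating system.

#include <stdio.h>

#define FRAMES 16                       /* toy physical memory: 16 page frames */
static unsigned char in_use[FRAMES];    /* 1 = allocated, 0 = free             */

/* Keep track of which frames are free and allocate one of them. */
int alloc_frame(void)
{
    for (int i = 0; i < FRAMES; i++)
        if (!in_use[i]) { in_use[i] = 1; return i; }
    return -1;                          /* no free frame: the request must wait */
}

/* Reclaim a frame when the owning process no longer needs it. */
void free_frame(int frame)
{
    if (frame >= 0 && frame < FRAMES)
        in_use[frame] = 0;
}

int main(void)
{
    int f = alloc_frame();
    printf("allocated frame %d\n", f);
    free_frame(f);                      /* the frame can now be reused          */
    return 0;
}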
While a program is a passive entity, a process is an active entity performing the intended functions of its related program. A process needs certain resources like the CPU, memory, files and I/O devices. In multiprogramming, several processes may be active simultaneously in the system. The Operating System is responsible for the following processor/process management functions:
Keep track of the processor and the status of processes. The module that does this is often called the traffic controller.
Decide who (which process) will have an opportunity to use the processor. The job scheduler (also called the long-term scheduler) chooses from all the submitted jobs and decides which one will be allowed into the system, i.e., have its stint with the CPU. If multiprogramming, decide which process gets the processor, when, and for how much time. The module that does this is called the process scheduler (or dispatcher).
Reclaim the processor when the process ceases to use it, or exceeds the allowed amount of usage.
It may be noted that, in making the decision on which job gets into the system, several factors are taken into account. As an example, a job that is requesting more memory or tape drives than the system has should not be permitted into the system; that is, it should not have any resource assigned to it, not even the main memory.
The Operating System is responsible for the following I/O Device Management
Functions:
Keep track of the resources (I/O devices, I/O channels, etc.). This module is typically called the I/O traffic controller.
Decide an efficient way to allocate the I/O resource. If it is shared, decide who gets it, how much of it is to be allotted, and for how long. This is called I/O scheduling.
Reclaim the device as and when its use is over. In most cases I/O terminates automatically.
The method of deciding how devices are allocated depends on whether the I/O device is dedicated, shared, or virtual.
The major information management functions for which the Operating System is responsible are:
Keep track of the information: its location, usage, status, etc. The module that does this is often called the file system.
Decide who gets hold of the information, apply protection mechanisms, and provide routines to access it.
Allocate the information to a requesting process and de-allocate (reclaim) it when it is no longer needed.
An Operating System is also responsible for computer system networking in a distributed environment. A distributed system is a collection of processors that do not share memory, a clock pulse, or any peripheral devices. Instead, each processor has its own clock pulse and RAM, and the processors communicate through a network. Access to shared resources across the network is coordinated by the operating system.
There is a need to identify the system resources that must be managed by the operating system and, using the process viewpoint, to indicate when the corresponding resource manager comes into play. We now answer the question, “How are these resource managers activated, and where do they reside?” Does the memory manager ever invoke the process scheduler? Does the scheduler ever call upon the services of the memory manager? Is the process concept only for the user, or is it used by the operating system itself as well?
Here, we first present the concept of the bare machine – a computer without its software clothing. A bare machine does not provide the environment which most programmers desire. The instructions to perform resource management functions are typically not provided by the bare machine; they form a part of the operating system services. A user program requests these services by issuing special supervisor-call instructions, which transfer control to the operating system. So the operating system provides several instructions in addition to the bare machine instructions. Instructions that form a part of the bare machine and those provided by the operating system together constitute the instruction set of the extended machine.
The operating system kernel runs on the bare machine; user programs run on the extended machine. This means that the kernel of the operating system is written using the instructions of the bare machine only, whereas the users can write their programs using the instructions of the extended machine.
In theory, every computer system may be programmed in its machine language, with no systems software support. Programming of the bare machine was customary for early computer systems. Programming of the bare machine results in low productivity of both users and hardware. The long and tedious process of program and data entry practically precludes the execution of all but very short programs in such an environment.
The next significant evolutionary step in computer-system usage came about with the advent of input/output devices, such as punched cards and paper tape, and of language translators. Programs, now coded in a programming language, are translated into executable form by a language translator. Another program, called the loader, automates the process of loading executable programs into
memory. The user places a program and its input data on an input device, and the loader
transfers information from that input device into memory. After transferring control to
the loader program by manual or automatic means, execution of the program commences.
The executing program reads its input from the designated input device and may produce
some output on an output device, such as a printer or display screen. Once in memory, the program may be rerun with a different set of input data. Program development and preparation in this environment are, however, quite slow and cumbersome due to serial execution of programs and to numerous manual
operations involved in the process. In a typical sequence, the editor program is loaded to
prepare the source code of the user program. The next step is to load and execute the
language translator and to provide it with the source code of the user program. When
serial input devices, such as card reader, are used, multiple-pass language translators may
require the source code to be repositioned for reading during each pass. If syntax errors
are detected, the whole process must be repeated from the beginning. Eventually, the
object code produced from the syntactically correct source code is loaded and executed.
If run-time errors are detected, the state of the machine can be examined and modified by means of console switches or with the assistance of a debugging program. The mode of operation described here was initially used in the late fifties, and it persisted for some time on simple single-user computer systems.
In addition to language translators, systems software includes the loader and possibly
editor and debugger programs. Most of them use input/output devices and thus must
contain some code to exercise those devices. Since many user programs also use I/O devices, it became obvious that considerable savings could be achieved by providing standard I/O routines for the use of all programs. This realization led to a progression of implementations, ranging from the placing of card decks with I/O routines into the user code, to the eventual collection of pre-compiled routines and the use of linker and loader programs to combine them with user code.
In the described system, the I/O routines and the loader program represent a rudimentary form of operating system: they provide an environment for program execution beyond what is available on the bare machine. Language translators, editors, and
debuggers are systems programs that rely on the service of, but are not generally regarded
as part of, the operating system. For example, the language translator would normally
use the provided I/O routines to obtain its input (the source code) and to produce the
output.
Although a definite improvement over the bare-machine approach, this mode of operation
is obviously not very efficient. Running of the computer system may require frequent
manual loading of programs and data. This results in low utilization of system resources.
User productivity, especially in multi-user environments, is low as users wait their turn at
the machine. Even with such tools as editors and debuggers, program development is
very slow and is ridden with manual program and data loading.
The next logical step in the evolution of operating systems was to automate the sequencing of operations involved in program execution and in program development. The intent was to increase system resource utilization and programmer productivity by reducing or eliminating the idle times caused by comparatively lengthy manual operations.
Even when automated, housekeeping operations such as mounting of tapes and filling out
log forms take a long time relative to processors and memory speeds. Since there is not
much that can be done to reduce the time taken by these operations, system performance may be increased by dividing this overhead among a number of programs. More specifically, if several programs are "batched" together on a single input tape for which housekeeping operations are performed only once, the overhead per program is reduced accordingly. A
related concept, sometimes called phasing, is to prearrange submitted jobs so that similar ones are placed in the same batch. To realize the resource-utilization potential of batch processing, a sequence of user jobs must be executed automatically, without slow human intervention. To this end, some means must be provided to instruct the operating system how to treat each individual job. These instructions are operating system commands embedded in the batch stream. Operating system commands are statements written in Job Control Language (JCL).
A memory-resident portion of the batch operating system, sometimes called the batch monitor, reads, interprets, and executes these commands. In response to them, batch jobs are executed one at a time. A job may consist of several steps, each of which usually involves loading and execution of a program. For example, a job may consist of compilation, linking, and execution of a user program. When a Job-END command is encountered, the monitor may look for another job, which may be identified by a Job-START command.
By reducing or eliminating component idle time due to slow manual operations, batch
processing offers a greater potential for increased system resource utilization and
throughput than simple serial processing, especially in computer systems that serve
multiple users. As far as program development is concerned, batch is not a great
improvement over the simple serial processing. The turnaround time, measured from the
time a job is submitted until its output is received, may be quite long in batch systems.
Phasing may further increase the turnaround time by introducing additional waiting for a
complete batch of the given kind to be assembled. Moreover, programmers are forced to
debug their programs offline using post-mortem memory dumps, as opposed to being
able to examine the state of the machine immediately upon detection of a failure. It is quite difficult to reconstruct the state of the system just on the basis of an after-the-fact memory snapshot.
With sequencing of program execution mostly automated by batch operating systems, the
speed discrepancy between fast processors and comparatively slow I/O devices, such as card readers and printers, emerged as a major performance bottleneck. Further improvements in batch processing were mostly along the lines of increasing the throughput and resource utilization by overlapping input and output operations.
Many single-user operating systems for personal computers basically provide for serial
processing. User programs are commonly loaded into memory and executed in response
to user commands typed on the console. A file management system is often provided for
program and data storage. A form of batch processing is made possible by means of files
consisting of commands to the operating system that are executed in sequence. Command
files are primarily used to automate complicated customization and operational sequences
of frequent operations.
3.3.3 Multiprogramming
Early computers ran one process at a time. While the process waited for servicing by
another device, the CPU was idle. In an I/O intensive process, the CPU could be idle as
much as 80% of the time. Advancements in operating systems led to computers that load
several independent processes into memory and switch the CPU from one job to another
when the first becomes blocked while waiting for servicing by another device. This idea is known as multiprogramming; it increases the throughput of the system by using the CPU time efficiently. In multiprogramming, many processes are simultaneously resident in memory, and execution switches between them; in effect, several programs are run at the same time on a uniprocessor. Since there is only one
processor, there can be no true simultaneous execution of different programs. Instead, the
operating system executes part of one program, then part of another, and so on. To the
user it appears that all programs are executing at the same time. The advantages of
multiprogramming are the same as the commonsense reasons that in life you do not
always wait until one thing has finished before starting the next thing. Specifically:
More efficient use of computer time. If the computer is running a single process, and
the process does a lot of I/O, then the CPU is idle most of the time. This is a gain as
long as some of the jobs are I/O bound -- spend most of their time waiting for I/O.
Faster turnaround if there are jobs of different lengths. Consideration (1) applies only
if some jobs are I/O bound. Consideration (2) applies even if all jobs are CPU bound.
For instance, suppose that first job A, which takes an hour, starts to run, and then
immediately afterward job B, which takes 1 minute, is submitted. If the computer has
to wait until it finishes A before it starts B, then user A must wait an hour; user B
must wait 61 minutes; so the average waiting time is 60.5 minutes. If the computer
can switch back and forth between A and B until B is complete, then B will complete
after 2 minutes; A will complete after 61 minutes; so the average waiting time will be
31.5 minutes. If all jobs are CPU bound and the same length, then there is no gain from switching between them. A basic requirement of multiprogramming is that the processes do not interfere with one another; that is, the actions carried out by each process should proceed in the same way as if the process had the machine to itself. Supporting multiprogramming raises several design issues:
Process model: The state of an inactive process has to be encoded and saved in a
process table so that the process can be resumed when made active.
Context switching: How does one carry out the change from one process to another?
Memory translation: Each process treats the computer's memory as its own private
playground. How can we give each process the illusion that it can reference addresses
in memory as it wants, but not have them step on each other's toes? The trick is by
distinguishing between virtual addresses -- the addresses used in the process code --
and physical addresses -- the actual addresses in memory. Each process is actually
given a fraction of physical memory. The memory management unit translates the
virtual address in the code to a physical address within the user's space. This translation is performed on every memory reference (a minimal sketch of the idea appears after this list).
Memory management: How does the Operating System assign sections of physical memory to each process?
Scheduling: How does the Operating System choose which process to run when?
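The following minimal C sketch illustrates the base/limit style of virtual-to-physical address translation mentioned in the memory-translation item above. The structure name mapping and the specific base and limit values are illustrative assumptions, not the behaviour of any particular memory management unit.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical relocation information the OS keeps for one process. */
struct mapping {
    uint32_t base;    /* start of the process's region in physical memory */
    uint32_t limit;   /* size of the region in bytes                      */
};

/* Translate a virtual address the way a simple base/limit unit would. */
uint32_t translate(const struct mapping *m, uint32_t vaddr)
{
    if (vaddr >= m->limit) {                     /* outside the process's space */
        fprintf(stderr, "addressing error: trap to the operating system\n");
        exit(EXIT_FAILURE);
    }
    return m->base + vaddr;                      /* corresponding physical address */
}

int main(void)
{
    struct mapping p1 = { 0x40000u, 0x10000u };  /* 64 KB region starting at 256 KB */
    printf("virtual 0x100 -> physical 0x%x\n", (unsigned) translate(&p1, 0x100u));
    return 0;
}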
Let us briefly review some aspects of program behavior in order to motivate the basic idea of multiprogramming. In the figures that follow, a shaded box represents a period during which the CPU is in use for computation, and a white box represents a period during which the CPU is idle due to some I/O activity. Idealized serial execution of two programs, with no inter-
program idle times, is depicted in Figure 1.2. For comparison purposes, both programs
are assumed to have identical behavior with regard to processor and I/O times and their
relative distributions. As Figure 1.2 suggests, serial execution of programs causes either
the processor or the I/O devices to be idle at some time even if the input job stream is
never empty. One way to attack this problem is to assign some other work to the processor while a program is waiting for its I/O to complete.
Figure 1.2: Serial execution of two programs (Program 1 followed by Program 2).
Figure 1.3 illustrates a possible scenario of concurrent execution of the two programs
introduced in Figure 1.2. It starts with the processor executing the first computational
sequence of Program 1. Instead of idling during the subsequent I/O sequence of Program
1, the processor is assigned to the first computational sequence of the Program 2, which
is assumed to be in memory and awaiting execution. When this work is done, the processor is switched back to Program 1 when its I/O completes, and execution continues to alternate between the two programs in this way.
Figure 1.3: Multiprogrammed execution of Program 1 and Program 2; the CPU-activity timeline alternates P1, P2, P1, P2, P1 over time.
This mode of operation is usually called multiprogramming. With a single processor, parallel execution of programs is not possible, and at most one program can be in control of the processor at any time. The example presented in Figure 1.3 achieves 100% processor utilization with only two active programs. Time-sharing systems found in many university computer centers provide a typical example of
a multiprogramming system.
We can classify the universe of operating systems on the basis of several criteria, viz. the number of simultaneously active programs, the number of users working at the same time, the number of processors in the computer system, etc. In the following sub-sections, some important types of operating systems are discussed.
3.4.1 Batch Operating System
As mentioned earlier, batch execution typically requires the program, its data, and the applicable system commands to be submitted together as a job. Batch operating systems usually permit very little or no interaction between users and executing programs. Batch processing has a greater potential for resource utilization than serial processing in computer systems serving multiple users. Because of turnaround delays and offline debugging, however, batch is not very convenient for program development. Programs that do not need interaction and programs with long execution times are served well by a batch operating system; payroll, forecasting and statistical analysis programs are typical examples. Memory in a batch operating system is sometimes divided into two areas. The resident portion of the operating system always occupies one of them, and the other is used to load transient programs for execution. Once a transient program terminates, a new program is loaded into the same area of memory.
Since at most one program is in execution at any time, batch systems do not need any time-critical device management. For this reason, many serial processing and batch operating systems use the simple, program-controlled method of I/O described later. The absence of contention for I/O devices makes their allocation and de-allocation trivial.
Batch systems typically provide straightforward forms of file management. Since access to files is also serial, little protection and no concurrency control of file access is needed.
3.4.2 Multiprogramming Operating System
A multiprogramming operating system is one that allows end-users to run more than one program at a time. It works by allowing the central processing unit (CPU) of the computer to switch between two or more running tasks whenever the currently running task becomes idle, for example while waiting for I/O. A multiprogramming system permits multiple programs to be loaded into memory and executed concurrently, which improves system throughput and resource utilization relative to batch and serial processing. This improvement is achieved by sharing the resources of the computer system among a multitude of active programs. Such operating systems usually have the prefix multi in their names, such as multitasking or multiprogramming, as shown in Figure 1.5.
Figure 1.5: Memory layout in Multiprogramming
It allows more than one program to run concurrently. The ability to execute more than one task at the same time is called multitasking. An instance of a program in execution is called a process.
Multitasking is often coupled with hardware and software support for memory protection
in order to prevent erroneous processes from corrupting address spaces and behavior of
other resident processes. The terms multitasking and multiprocessing are often used
interchangeably, although multiprocessing sometimes implies that more than one CPU is
involved. In multitasking, only one CPU is involved, but it switches from one program to
another so quickly that it gives the appearance of executing all of the programs at the
same time. There are two basic types of multitasking: preemptive and cooperative. In
preemptive multitasking, the Operating System parcels out CPU time slices to each
program. In cooperative multitasking, each program can control the CPU for as long as it
needs it. If a program is not using the CPU, however, it can allow another program to use it temporarily.
Multiprogramming operating systems usually support multiple users, in which case they are also called multi-user systems. Multi-user operating systems provide facilities for the maintenance of individual user environments and therefore require user accounting. In general, multiprogramming implies multitasking, but multitasking does not imply multi-user operation. Multitasking is, in effect, one of the mechanisms that a multiprogramming operating system employs in managing the totality of computer system resources, including the processor, memory, and I/O devices. Multitasking operation without multi-user support can be found in the operating systems of some advanced personal computers and in real-time systems. Such systems provide the usual complement of other system services that may qualify them as multiprogramming systems in their own right.
3.4.5 Multithreading
Multithreading allows different parts of a single program, called threads, to run concurrently. The programmer must carefully design the program in such a way that all the threads can run at the same time without interfering with each other. Time-sharing systems are a popular representative of interactive, multi-user multiprogramming systems; applications such as computer-aided design (CAD) and text processing fit into this type. One of the primary objectives of time-sharing systems is to provide good terminal response time. Giving the illusion to every user of having a machine to oneself, time-sharing systems typically try to provide equitable sharing of common resources. For example, when the system is loaded, users with more demanding processing needs are made to wait longer. Most time-sharing
systems use time-slicing (round robin) scheduling. In this approach, programs are
executed with a rotating priority that increases during waiting and decreases after the service is granted. In order to prevent programs from monopolizing the processor, a program executing longer than the system-defined time slice is interrupted by the operating system and placed at the end of the queue of waiting programs. This mode of operation generally provides an acceptable response time to interactive users.
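A minimal C sketch of the time-slicing idea follows. The quantum length, the job names and the remaining-time values are illustrative assumptions; a real scheduler would, of course, work with processes rather than a small array.

#include <stdio.h>

#define QUANTUM 2   /* system-defined time slice, in ticks */

struct job { const char *name; int remaining; };

int main(void)
{
    /* three programs in the queue of waiting programs, with different CPU demands */
    struct job queue[8] = { {"A", 5}, {"B", 1}, {"C", 3} };
    int n = 3;

    while (n > 0) {
        struct job j = queue[0];                 /* dispatch the job at the head   */
        for (int i = 1; i < n; i++)
            queue[i - 1] = queue[i];
        n--;
        int run = j.remaining < QUANTUM ? j.remaining : QUANTUM;
        printf("%s runs for %d tick(s)\n", j.name, run);
        j.remaining -= run;
        if (j.remaining > 0)
            queue[n++] = j;                      /* interrupted: back of the queue */
    }
    return 0;
}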
Memory management in time-sharing systems provides for the separation and protection of co-resident programs. Some form of controlled sharing may be provided in order to conserve memory and possibly to exchange data between programs. Being executed on behalf of different users, programs in time-sharing systems generally do not have much need to communicate with one another.
I/O management in time-sharing systems should be sophisticated enough to deal with multiple users and devices. However, due to the comparatively slow speeds of terminals and human users, relatively simple device allocation and scheduling usually preserves system integrity and provides good performance. Given the possibility of simultaneous and possibly conflicting attempts to access files, file management in a time-sharing system should provide protection and access control. This task is often complicated by the need for files to be shared among certain users or categories of users.
Real-time operating systems are used in environments where a large number of events, mostly external to the computer system, must be accepted and processed in a short time or within certain deadlines. Such applications include industrial control, telephone switching equipment, flight control, and real-time simulations. Real-time systems are also frequently found in military applications.
A primary objective of real-time systems is to provide fast event-response times and thus meet the scheduling deadlines. User convenience and resource utilization are of secondary concern to real-time system designers. It is not unusual for a real-time system to be expected to process bursts of thousands of interrupts per second without missing a single event. Such requirements usually cannot be met by multiprogramming alone, and real-time operating systems rely on some specific policies and techniques for doing their job.
Fast response to external events is facilitated by multitasking in real-time systems. Basically, a separate process is charged with handling one external event. The process is activated upon occurrence of the related event, which is commonly signaled by an interrupt, and the processes are scheduled to run largely independently of each other. Every process is assigned a specific level of priority that corresponds to the relative importance of the event that it services. The processor is normally allotted to the highest-priority process among those that are ready to execute. Higher-priority processes usually preempt the execution of lower-priority processes.
Memory management in real-time systems is relatively less demanding than in other types of multiprogramming systems. The primary reason for this is that many processes permanently reside in memory so as to provide fast response times. Unlike, say, time-sharing, the process population in real-time systems is fairly static, and there is relatively little moving of programs between primary and secondary storage. On the other hand, processes in real-time systems tend to cooperate closely, thus necessitating support for both the separation and the sharing of memory. In addition to dealing with interrupt management and I/O buffering, real-time operating systems typically provide time-critical management of I/O devices.
File management is usually found only in larger installations of real-time systems. In fact, some embedded real-time systems, such as an onboard automotive controller, may not even have any secondary storage. However, where provided, file management in real-time systems should satisfy much the same requirements as are found in time-sharing systems, such as protection and access control. The primary objective of file management in real-time systems is usually the speed of access rather than efficient utilization of secondary storage.
Different kinds of operating systems are optimized, or at least largely geared, toward serving the needs of specific environments. In practice, however, a given environment may not precisely match any of the described molds. For this reason, some commercial operating systems provide a combination of the described services. For example, a time-sharing system may support interactive users and also incorporate a full-fledged batch monitor. This allows computationally intensive, non-interactive programs to be run concurrently with interactive programs. The common practice is to assign low priority to batch jobs and to execute batched programs only when the processor would otherwise be idle; in other words, batch processing provides a useful service of its own while absorbing processor time that would otherwise be wasted. Similarly, some time-critical events, such as receipt and transmission of network data packets, may be handled in real-time fashion on systems that otherwise provide time-sharing services to their users while running network protocols.
A distributed operating system manages a collection of independent, networked computers and provides a virtual machine abstraction to its users. The key objective of a distributed operating system is transparency: the distribution of resources across machines is ideally hidden from users and application programs unless they explicitly demand otherwise. Distributed operating systems usually provide the means for system-wide sharing of resources, such as computational capacity, files, and I/O devices. In addition to the typical operating-system services provided at each node for the benefit of local clients, a distributed operating system provides facilities for communication with remote processes and for the distribution of computations. The added services necessary for pooling of shared system resources include global naming, a distributed file system, and facilities for distributing computations among the participating nodes.
4. SUMMARY
In this lesson, the evolution of operating systems was traced through the program-execution environments provided by the bare machine, serial processing including batch processing, and multiprogramming.
On the basis of their attributes and design objectives, different types of operating systems
were defined and characterized with respect to scheduling and management of memory,
devices, and files. The primary concerns of a time-sharing system are equitable sharing of resources among, and responsiveness to, interactive users. Real-time operating systems are mostly concerned with responsive handling of external events generated by the controlled system. Distributed operating systems provide facilities for global naming and for accessing resources spread across a network of computers.
Typical services provided by an operating system to its users were presented from the
point of view of command-language users and system-call users. In general, system calls
provide functions similar to those of the command language but allow finer gradation of
control.
5. SUGGESTED READINGS / REFERENCE MATERIAL
Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B., John Wiley & Sons.
Systems Programming & Operating Systems, 2nd Revised Edition, Dhamdhere D.M., Tata McGraw Hill Publishing Company Ltd., New Delhi.
Operating Systems, Madnick S.E., Donovan J.T., Tata McGraw Hill Publishing Company Ltd., New Delhi, 2000.
Operating Systems, Harris J.A., Tata McGraw Hill Publishing Company Ltd., New Delhi, 2002.
CS-DE-15
LESSON NO. 2
STRUCTURE
1. Introduction
2. Objective
3. Presentation of Contents
3.1.4 Communication
3.1.7 Protection
4. Summary
5. Suggested Readings / Reference Material
1. INTRODUCTION
An operating system provides the environment within which various types of programs are executed. It is therefore essential that the goals of the system be well defined before its design starts. One view of an operating system focuses on the services that the system provides. In this lesson, we look at various aspects of
operating systems. We consider what services an operating system provides and how they are
provided.
2. OBJECTIVES
In this lesson, the services provided by an operating system to users, processes, and other systems
have been explained. The concept of operating system call and its different types has been
described. The concept and use of system programs has also been elaborated.
3. PRESENTATION OF CONTENTS
An operating system provides definite services to programs and to the users of those programs. The
services provided differ from one operating system to another. These operating system services are
provided for the convenience of the programmer, to make the programming job easier. These services include program execution, I/O operations, file-system manipulation, communication, error detection, resource allocation, and protection.
The operating system handles many kinds of activities, from user programs to system programs like
printer spooler, name servers, file server etc. Each of these activities is encapsulated as a process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with respect to program management: loading a program into memory, executing it, and providing mechanisms for process synchronization, process communication, and deadlock handling.
The I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide
the peculiarities of specific hardware devices from the user as the device driver knows the
peculiarities of the specific device. Operating System manages the communication between user
and device drivers. Following are the major activities of an operating system with respect to I/O
Operation:
I/O operation means read or write operation with any file or any specific I/O device.
Operating system provides the access to the required I/O device when required.
A file represents a collection of related information. Computer can store files on the disk
(secondary storage), for long term storage purpose. Few examples of storage media are magnetic
tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has its own
properties like speed, capacity, and data transfer rate and data access methods. A file system is
normally organized into directories for easy navigation and usage. These directories may contain
files and other directories. Following are the major activities of an operating system with respect
to file management:
The operating system grants a program permission to perform operations on a file.
3.1.4 Communication
In the case of distributed systems, which are a collection of processors that do not share memory or peripheral devices, the operating system manages communication between processes. Multiple processes communicate with one another through communication lines in the network.
OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication:
Both processes can be on the same computer or on different computers, but they are connected through a network.
Communication may be implemented either by Shared Memory or by Message Passing.
3.1.5 Error handling
Error can occur anytime and anywhere. Error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to error handling: the OS constantly checks for possible errors and takes appropriate action to ensure correct and consistent computing.
3.1.6 Resource Management
In the case of a multi-user or multi-tasking environment, resources such as main memory, CPU cycles and file storage are to be allocated to each user or job. Following are the major activities of an operating system with respect to resource management: the OS manages all kinds of resources using schedulers and allocates them to users and jobs as required.
3.1.7 Protection
Considering a computer system having multiple users and the concurrent execution of multiple processes, the various processes must be protected from one another's activities. Protection refers to a mechanism or a way to control the access of programs, processes, or users to the resources defined by the computer system. Following are the major activities of an operating system with respect to protection:
OS ensures that external I/O devices are protected from invalid access attempts.
OS provides authentication feature for each user by means of a password.
Almost all operating systems have a user interface (UI). This interface can take several forms. One
is a command-line interface (CLI), which uses text commands and a method for entering them (say,
a program to allow entering and editing of commands). Another is a batch interface, in which
commands and directives to control those commands are entered into files, and those files are
executed. Most commonly a graphical user interface (GUI) is used. Here, the interface is a window
system with a pointing device to direct I/O, choose from menus, and make selections and a keyboard
to enter text.
3.2 System Calls
System calls allow user-level processes to request services from the operating system that the process itself is not allowed to perform. A system call is typically invoked by means of a trap (software interrupt). In handling the trap, the operating system enters kernel mode, where it has access to privileged instructions, and can perform the desired service on behalf of the user-level process. It is because of the critical nature of these operations that the operating system itself performs them every time they are needed. For example, for I/O, a process makes a system call telling the operating system to read or write a particular area, and this request is satisfied
by the operating system. These calls are generally available as routines written in C and C++. Before
we discuss how an operating system makes system calls available, let’s first use an example to
illustrate how system calls are used: writing a simple program to read data from one file and copy
them to another file. The first input that the program will need is the names of the two files: the
input file and the output file. These names can be specified in many ways, depending on the
operating system design. One approach is for the program to ask the user for the names of the two
files. In an interactive system, this approach will require a sequence of system calls, first to write a
prompting message on the screen and then to read from the keyboard the characters that define the
two files. On mouse-based and icon-based systems, a menu of file names is usually displayed in a
window. The user can then use the mouse to select the source name, and a window can be opened
for the destination name to be specified. This sequence requires many I/O system calls.
Once the two file names are obtained, the program must open the input file and create the output
file. Each of these operations requires another system call. There are also possible error conditions
for each operation. When the program tries to open the input file, it may find that there is no file of
that name or that the file is protected against access. In these cases, the program should print a
message on the console (another sequence of system calls) and then terminate abnormally (another
system call). If the input file exists, then we must create a new output file. We may find that there is
already an output file with the same name. This situation may cause the program to abort (a system
call), or we may delete the existing file (another system call) and create a new one (another system
call). Another option, in an interactive system, is to ask the user (via a sequence of system calls to
output the prompting message and to read the response from the terminal) whether to replace the
existing file or to abort the program. Now that both files are set up, we enter a loop that reads from
the input file (a system call) and writes to the output file (another system call). Each read and write
must return status information regarding various possible error conditions. On input, the program
may find that the end of the file has been reached or that there was a hardware failure in the read
(such as a parity error). The write operation may encounter various errors, depending on the output device (no more disk space, printer out of paper, and so on). Finally, after the entire file is copied, the program may close both files (another system call), write a message to the console or window (more
system calls), and finally terminate normally (the final system call). As we can see, even simple
programs may make heavy use of the operating system. Frequently, systems execute thousands of
system calls per second. This sequence of system calls is shown in Figure 2.1.
Figure 2.1: Example system-call sequence for a program copying data from one file to another (accept the input and output file names, open/create the files, read and write in a loop, close the files, terminate normally).
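The following C sketch performs the copy just described using the POSIX open, read, write and close system-call wrappers. It is a minimal illustration that assumes a UNIX-like environment and omits some of the error handling discussed above.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char buf[4096];
    ssize_t n;

    if (argc != 3) {                                    /* obtain the two file names     */
        fprintf(stderr, "usage: %s input output\n", argv[0]);
        exit(1);
    }
    int in = open(argv[1], O_RDONLY);                   /* open the input file           */
    if (in < 0) { perror("open input"); exit(1); }

    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);  /* create the output   */
    if (out < 0) { perror("create output"); exit(1); }

    while ((n = read(in, buf, sizeof buf)) > 0)         /* read from input (system call) */
        if (write(out, buf, n) != n) {                  /* write to output (system call) */
            perror("write");
            exit(1);
        }
    if (n < 0) perror("read");

    close(in);                                          /* close both files              */
    close(out);
    return 0;                                           /* terminate normally            */
}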
3.2.1 Types of System Calls
System calls can be grouped roughly into five major categories: process control, file manipulation,
device manipulation, information maintenance, and communication. Figure 2.2 summarizes the types of system calls normally provided by an operating system.
Figure 2.2: Types of system calls
Process control: end, abort; load, execute; create process, terminate process; get/set process attributes; wait for time, wait event, signal event; allocate and free memory.
File management: create file, delete file; open, close; read, write, reposition; get/set file attributes.
Device management: request device, release device; read, write, reposition; get/set device attributes; logically attach or detach devices.
Information maintenance: get/set time or date; get/set system data; get/set process, file, or device attributes.
Communications: create, delete communication connection; send, receive messages; transfer status information; attach or detach remote devices.
Process Control: A running program needs to be able to halt its execution either normally
(end) or abnormally (abort). If a system call is made to terminate the currently running program
abnormally, or if the program runs into a problem and causes an error trap, a dump of memory is
sometimes taken and an error message generated. The dump is written to disk and may be
examined by a debugger (a system program designed to aid the programmer in finding and
correcting bugs) to determine the cause of the problem. Under either normal or abnormal
circumstances, the operating system must transfer control to the command interpreter. The
command interpreter then reads the next command. In an interactive system, the command
interpreter simply continues with the next command; it is assumed that the user will issue an
appropriate command to respond to any error. In a GUI system, a pop-up window might alert the
user to the error and ask for guidance. In a batch system, the command interpreter usually
terminates the entire job and continues with the next job. For example the standard C library
provides a portion of the system-call interface for many versions of UNIX and Linux. As an
example, let us assume a C program invokes the printf() statement. The C library intercepts this
call and invokes the necessary system call(s) in the operating system - in this instance, the
write() system call. The C library takes the value returned by write() and passes it back to the
user program. This is shown in Figure 2.3. Some systems allow control cards to indicate special
recovery actions in case an error occurs. A control card is a batch system concept. It is a
command to manage the execution of a process. If the program discovers an error in its input and wishes to terminate abnormally, it may also want to define an error level.
#include <stdio.h>

int main(void)
{
    /* printf() is a C library routine; the library in turn issues the write() system call */
    printf("hello");
    return 0;
}
Figure 2.3: Handling of a user application invoking the printf() library call; the call passes from user mode through the standard C library, which issues the write() system call handled in kernel mode.
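For comparison, on a UNIX-like system a program can bypass printf() and invoke the underlying kernel service directly through the write() wrapper declared in <unistd.h>; this minimal sketch assumes a POSIX environment.

#include <unistd.h>     /* declares the write() system-call wrapper */

int main(void)
{
    /* file descriptor 1 (standard output), buffer, and byte count */
    write(STDOUT_FILENO, "hello\n", 6);
    return 0;
}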
More severe errors can be indicated by a higher-level error parameter. It is then possible to combine
normal and abnormal termination by defining a normal termination as an error at level 0. The
command interpreter or a following program can use this error level to determine the next action
automatically.
A process or job executing one program may want to load and execute another program. This
feature allows the command interpreter to execute a program as directed by, for example, a user
command, the click of a mouse, or a batch command. An interesting question is where to return
control when the loaded program terminates. This question is related to the problem of whether the
existing program is lost, saved, or allowed to continue execution concurrently with the new
program. If control returns to the existing program when the new program terminates, we must save
the memory image of the existing program; thus, we have effectively created a mechanism for one
program to call another program. If both programs continue concurrently, we have created a new job
or process to be multi-programmed. Often, there is a system call specifically for this purpose.
If we create a new job or process, or perhaps even a set of jobs or processes, we should be able to
control its execution. This control requires the ability to determine and reset the attributes of a job or
process, including the job's priority, its maximum allowable execution time, and so on (get process
attributes and set process attributes). We may also want to terminate a job or process that we created
(terminate process) if we find that it is incorrect or is no longer needed. Having created new jobs or
processes, we may need to wait for them to finish their execution. We may want to wait for a certain
amount of time to pass (wait time); more probably, we will want to wait for a specific event to occur
(wait event). The jobs or processes should then signal when that event has occurred (signal event).
Another set of system calls is helpful in debugging a program. Many systems provide system calls to
dump memory. This provision is useful for debugging. A program trace lists each instruction as it is
executed; it is provided by fewer systems. Even microprocessors provide a CPU mode known as
single step, in which a trap is executed by the CPU after every instruction. The trap is usually caught
by a debugger. Many operating systems provide a time profile of a program to indicate the amount
of time that the program executes at a particular location or set of locations. A time profile requires
either a tracing facility or regular timer interrupts. At every occurrence of the timer interrupt, the value of the program counter is recorded; with sufficiently frequent timer interrupts, a statistical picture of the time spent in various parts of the program can be obtained.
File Management: We identify several common system calls dealing with files; we first
need to be able to create and delete files. Either system call requires the name of the file and perhaps
some of the file's attributes. Once the file is created, we need to open it and to use it. We may also
read, write, or reposition (rewinding or skipping to the end of the file, for example). Finally, we need
to close the file, indicating that we are no longer using it. We may need these same sets of
operations for directories if we have a directory structure for organizing files in the file system. In
addition, for either files or directories, we need to be able to determine the values of various
attributes and perhaps to reset them if necessary. File attributes include the file name, a file type,
protection codes, accounting information, and so on. At least two system calls, get file attribute and
set file attribute, are required for this function. Some operating systems provide many more calls,
such as calls for file move and copy. Others might provide an API that performs those operations
using code and other system calls, and others might just provide system programs to perform those
tasks. If the system programs are callable by other programs, then each can be considered an API by other system programs.
Device Management: A process may need several resources to execute: main memory, disk
drives, access to files, and so on. If the resources are available, they can be granted, and control can
be returned to the user process. Otherwise, the process will have to wait until sufficient resources are
available. The various resources controlled by the operating system can be thought of as devices.
Some of these devices are physical devices (for example, tapes), while others can be thought of as
abstract or virtual devices (for example, files). If there are multiple users of the system, the system
may require us to first request the device, to ensure exclusive use of it. After we are finished with
the device, we release it. These functions are similar to the open and close system calls for files.
Other operating systems allow unmanaged access to devices. Once the device has been requested
(and allocated to us), we can read, write, and (possibly) reposition the device, just as we can with
files. In fact, the similarity between I/O devices and files is so great that many operating systems,
including UNIX, merge the two into a combined file-device structure. In this case, a set of system
calls is used on files and devices. Sometimes, I/O devices are identified by special file names,
directory placement, or file attributes. The UI can also make files and devices appear to be similar,
even though the underlying system calls are dissimilar. This is another example of the many design decisions that go into building an operating system.
Information Maintenance: Many system calls exist simply for the purpose of transferring
information between the user program and the operating system. For example, most systems have a system call to return the current time and date. Other system calls may return
information about the system, such as the number of current users, the version number of the
operating system, the amount of free memory or disk space, and so on. In addition, the operating
system keeps information about all its processes, and system calls are used to access this
information. Generally, calls are also used to reset the process information (get process attributes and set process attributes).
Communication: There are two common models of inter process communication: the
message passing model and the shared-memory model. In the message-passing model, the
communicating processes exchange messages with one another to transfer information. Messages
can be exchanged between the processes either directly or indirectly through a common mailbox.
Before communication can take place, a connection must be opened. The name of the other
communicator must be known, be it another process on the same system or a process on another
computer connected by a communications network. Each computer in a network has a host name by
which it is commonly known. A host also has a network identifier, such as an IP address. Similarly,
each process has a process name, and this name is translated into an identifier by which the
operating system can refer to the process. The get host id and get process id system calls do this
translation. The identifiers are then passed to the general purpose open and close calls provided by
the file system or to specific open connection and close connection system calls, depending on the
system's model of communication. The recipient process usually must give its permission for
communication to take place with an accept connection call. Most processes that will be receiving
connections are special-purpose daemons, which are systems programs provided for that purpose.
They execute a wait for connection call and are awakened when a connection is made. The source
of the communication, known as the client, and the receiving daemon, known as a server, then
exchange messages by using read message and write message system calls. The close connection
call terminates the communication. In the shared-memory model, processes use shared memory create and shared memory attach system calls to create and gain access to regions of memory
owned by other processes. Recall that, normally, the operating system tries to prevent one process
from accessing another process's memory. Shared memory requires that two or more processes
agree to remove this restriction. They can then exchange information by reading and writing data in
the shared areas. The form of the data and the location are determined by the processes and are not
under the operating system's control. The processes are also responsible for ensuring that they are
not writing to the same location simultaneously. Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. It is also easier to implement than shared memory, particularly for communication between different computers.
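As a concrete illustration of the message-passing style, the short C sketch below uses a POSIX pipe between a parent and a child process; the message text is arbitrary and the example assumes a UNIX-like system.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[64];

    if (pipe(fd) < 0) { perror("pipe"); return 1; }     /* open the connection         */

    if (fork() == 0) {                                  /* child acts as the sender    */
        close(fd[0]);                                   /* close the unused read end   */
        const char *msg = "hello via message passing";
        write(fd[1], msg, strlen(msg) + 1);             /* send the message            */
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                                       /* parent acts as the receiver */
    if (read(fd[0], buf, sizeof buf) > 0)               /* receive the message         */
        printf("received: %s\n", buf);
    close(fd[0]);                                       /* close the connection        */
    wait(NULL);
    return 0;
}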
3.2.2 Examples of System Calls
System calls are kernel level service routines for implementing basic operations performed by the
operating system. Below are mentioned some of several generic system calls that most operating
systems provide.
CREATE (processID, attributes);
In response to the CREATE call, the operating system creates a new process with the specified or default attributes and identifier. As pointed out earlier, a process cannot create itself,
because it would have to be running in order to invoke the OS, and it cannot run before being
created. So a process must be created by another process. In response to the CREATE call, the
operating system obtains a new PCB from the pool of free memory, fills the fields with provided
and/or default parameters, and inserts the PCB into the ready list, thus making the specified process
eligible to run. Some of the parameters definable at the process-creation time include:
Priority
Typical error returns, implying that the process was not created as a result of this call, include:
wrongID (illegal, or process already active), no space for PCB (usually transient; the call may be
retried later), and calling process not authorized to invoke this function. Ada uses the INITIATE statement to create and activate one or more tasks (processes). When several tasks are created with a single INITIATE statement, they all begin execution concurrently.
DELETE (process ID);
The DELETE service is also called DESTROY, TERMINATE, or EXIT. Its invocation
causes the OS to destroy the designated process and remove it from the system. A process may
delete itself or another process. The operating system reacts by reclaiming all resources allocated to
the specified process (attached I/O devices, memory), closing files opened by or for the process, and
performing whatever other housekeeping is necessary. Following this process, the PCB is removed
from its place of residence in the list and is returned to the free pool. This makes the designated
process dormant. The DELETE service is normally invoked as a part of orderly program
termination.
To relieve users of this burden and to enhance portability of programs across different
environments, many compilers compile the last END statement of a main program into a DELETE
system call.
Some systems permit a process to terminate itself only provided that none of its spawned processes is active. Operating system designers differ in their attitude toward allowing one process to terminate others. The issue here is one of convenience and
efficiency versus system integrity. Allowing uncontrolled use of this function provides a
malfunctioning or a malevolent process with the means of wiping out all other processes in the
system. On the other hand, terminating a hierarchy of processes in a strictly guarded system where
each process can only delete itself, and where the parent must wait for children to terminate first,
could be a lengthy operation indeed. The usual compromise is to permit deletion of other processes
but to restrict the range to the members of the family, to lower-priority processes only, or to some other restricted subset of processes.
Possible error returns from the DELETE call include: a child of this process is active (should
terminate first), wrongID (the process does not exist), and calling process not authorized to invoke
this function.
ABORT (processID);
ABORT is a forced termination of a process. Although a process could conceivably abort itself, the most frequent use of this call is for involuntary terminations, such as removal of a
malfunctioning process from the system. The operating system performs much the same actions as
in DELETE, except that it usually furnishes a register and memory dump, together with some
information about the identity of the aborting process and the reason for the action. This information may be recorded in a file for later examination with a post-mortem analyzer utility. Obviously, the issue of restricting the authority to abort other processes, discussed
in relation to the DELETE, is even more pronounced in relation to the ABORT call.
Error returns for ABORT are practically the same as those listed in the discussion of the
DELETE call. The Ada language includes the ABORT statement that forcefully terminates one or
more processes. Other than the usual scope and module-boundary rules, Ada imposes no special restrictions on its use.
FORK/JOIN
Another method of process creation and termination is by means of the FORK/JOIN pair,
originally introduced as primitives for multiprocessor systems. The FORK operation is used to split
a sequence of instructions into two concurrently executable sequences. After reaching the identifier
specified in FORK, a new process (child) is created to execute one branch of the forked code while
the creating (parent) process continues to execute the other. FORK usually returns the identity of the
child to the parent process, and the parent can use that identifier to designate the identity of the child
whose termination it wishes to await before invoking a JOIN operation. JOIN is used to merge the
two sequences of code divided by the FORK, and it is available to a parent process for synchronizing with the termination of the child.
The relationship between processes created by FORK is rather symbiotic in the sense that they execute from a single segment of code, and that a child usually initially obtains a copy of the variables of its parent. The FORK/JOIN pair thus resembles asynchronous procedure calls, where both the caller and the called procedure execute concurrently following an invocation. The JOIN primitive is used to synchronize the caller with the termination of the named (forked) procedure; this differs from the synchronous procedure-call mechanism in Mesa, which is very similar to an ordinary procedure call in Pascal or in Algol.
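The POSIX fork()/wait() pair behaves much like the FORK/JOIN primitives described above. The sketch below (C, POSIX assumed) shows a parent splitting off a child and then joining with its termination:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();            /* FORK: split into two sequences */

    if (child < 0) {
        perror("fork");
        exit(1);
    }
    if (child == 0) {                /* child branch of the forked code */
        printf("child %d working\n", (int)getpid());
        exit(0);                     /* child terminates */
    }

    /* parent branch: JOIN with the child identified by fork()'s return */
    waitpid(child, NULL, 0);
    printf("parent %d joined with child %d\n", (int)getpid(), (int)child);
    return 0;
}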
SUSPEND (processID);
The SUSPEND service is called SLEEP or BLOCK in some systems. The designated
process is suspended indefinitely and placed in the suspended state. It does, however, remain in the
system. A process may suspend itself or another process when authorized to do so by virtue of its
level of privilege, priority, or family membership. When the running process suspends itself, it in
effect voluntarily surrenders control to the operating system. The operating system responds by
inserting the target process's PCB into the suspended list and updating the PCB state field
accordingly.
Suspending a suspended process usually has no effect, except in systems that keep track of the depth of suspension. In such systems, a process must be resumed at least as many times as it was suspended in order to become ready. To implement this feature, a suspend-count field has to be maintained in each PCB. Typical error returns include: process already suspended, wrongID, and caller not authorized.
RESUME (processID);
The RESUME service is called WAKEUP in some systems. This call resumes the target
process, which is presumably suspended. Obviously, a suspended process cannot resume itself,
because a process must be running to have its OS call processed. So a suspended process depends on
a partner process to issue the RESUME. The operating system responds by inserting the target
process's PCB into the ready list, with the state updated. In systems that keep track of the depth of suspension, the OS first decrements the suspend count, moving the PCB only when the count reaches zero.
The SUSPEND/RESUME pair of calls can thus serve as a rudimentary form of inter-process synchronization. It is often used in systems that do not support exchange of
signals. Error returns include: process already active, wrongID, and caller not authorized.
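On POSIX systems a comparable effect can be obtained with the SIGSTOP and SIGCONT signals; the sketch below is only an analogy for the generic SUSPEND/RESUME calls, not an implementation of them:

#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {                /* child: idle forever              */
        for (;;)
            pause();
    }
    if (child < 0)
        return 1;

    kill(child, SIGSTOP);            /* SUSPEND(child): stopped state    */
    sleep(2);                        /* child remains suspended          */
    kill(child, SIGCONT);            /* RESUME(child): runnable again    */
    sleep(1);
    kill(child, SIGTERM);            /* clean up the example child       */
    return 0;
}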
The system call DELAY is also known as SLEEP. The target process is suspended for the
duration of the specified time period. The time may be expressed in terms of system clock ticks that
are system-dependent and not portable, or in standard time units such as seconds and minutes. A
process may delay itself or, optionally, delay some other process.
The actions of the operating system in handling this call depend on processing interrupts
from the programmable interval timer. The timed delay is a very useful system call for
implementing time-outs. In this application a process initiates an action and puts itself to sleep for
the duration of the time-out. When the delay (time-out) expires, control is given back to the calling
process, which tests the outcome of the initiated action. Two other varieties of timed delay are cyclic
rescheduling of a process at given intervals (e.g., running it once every 5 minutes) and time-of-day
scheduling, where a process is run at a specific time of the day. Examples of the latter are printing a
shift log in a process-control system when a new crew is scheduled to take over, and backing up a
database at midnight.
The error returns include: illegal time interval or unit, wrongID, and caller not authorized. In
Ada, a task may delay itself for a number of system clock ticks (system-dependent) or for a specified
time period using the pre-declared floating-point type TIME. The DELAY statement is used for this
purpose.
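A minimal sketch of the time-out idea in C on a POSIX system, using sleep() in place of the generic DELAY call; the start_io() and io_has_completed() helpers are assumptions made only for this example:

#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-ins for a real asynchronous device operation. */
static int done = 0;
static void start_io(void)         { /* pretend to start a transfer */ }
static int  io_has_completed(void) { return done; }

int main(void)
{
    start_io();                 /* initiate the action                       */
    sleep(3);                   /* DELAY: sleep for the time-out duration    */

    if (io_has_completed())     /* test the outcome once the time-out ends   */
        printf("operation finished in time\n");
    else
        printf("time-out expired, taking recovery action\n");
    return 0;
}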
current values of the process attributes, or their specified subset, from the PCB. This is normally the
only way for a process to find out what its current attributes are, because it neither knows where its
PCB is nor can access the protected OS space where the PCBs are usually kept.
This call may be used to monitor the status of a process, its resource usage and accounting
information, or other public data stored in a PCB. The error returns include: no such attribute,
wrongID, and caller not authorized. In Ada, a task may examine the values of certain task attributes
by means of reading the pre-declared task attribute variables, such as T'ACTIVE and T'CALLABLE.
Another service allows a process's priority to be changed at run time. Obviously, this call is not implemented in systems where process priority is static. Changing the priority affects the process's ability to compete for system resources. The idea is that priority of a process should rise
and fall according to the relative importance of its momentary activity, thus making scheduling more
responsive to changes of the global system state. Low-priority processes may abuse this call, and
processes competing with the operating system itself may corrupt the whole system. For these
reasons, the authority to increase priority is usually restricted to changes within a certain range. For
example, a maximum may be specified, or the process may not exceed its parent's or group priority. Although changing priorities of other processes could be useful, most implementations restrict this call to the priority of the calling process itself.
The error returns include: caller not authorized for the requested change and wrong ID. In Ada, a
task may change its own priority by calling the SET_PRIORITY procedure, which is pre-declared in
the language.
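On UNIX-like systems the analogous facility is the nice value: a process may lower its own priority freely but normally needs privilege to raise it, mirroring the restrictions described above. A sketch, assuming a POSIX/Linux environment:

#include <stdio.h>
#include <errno.h>
#include <sys/resource.h>

int main(void)
{
    /* Read and then change the nice value (larger = lower priority) of
       the calling process; raising priority (negative values) normally
       requires superuser privilege.                                     */
    errno = 0;
    int old = getpriority(PRIO_PROCESS, 0);
    if (old == -1 && errno != 0)
        perror("getpriority");

    if (setpriority(PRIO_PROCESS, 0, old + 5) == -1)
        perror("setpriority");        /* e.g. EACCES: not authorized */

    printf("nice value changed from %d to %d\n",
           old, getpriority(PRIO_PROCESS, 0));
    return 0;
}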
Another aspect of a modern system is the collection of system programs. At the lowest level is
hardware. Next is operating system, then the system programs, and finally the application programs.
System programs provide a convenient environment for program development and execution. Some
of them are simply user interfaces to system calls; others are considerably more complex. They can be divided into the following categories:
File management: These programs create, delete, copy, rename, print, dump, list, and generally manipulate files and directories.
Status information: Some programs simply ask the system for the date, time, amount of available
memory or disk space, number of users, or similar status information. Others are more complex,
providing detailed performance, logging, and debugging information. Typically, these programs
format and print the output to the terminal or other output devices or files, or display it in a window of the GUI. Some systems also support a registry, which is used to store and retrieve configuration
information.
File modification: Several text editors may be available to create and modify the content of files
stored on disk or other storage devices. There may also be special commands to search contents of files or perform transformations of the text.
Programming-language support: Compilers, assemblers, debuggers, and interpreters for common programming languages (such as C, C++, Java, Visual Basic, and PERL) are often provided to the user.
Program loading and execution: Once a program is assembled or compiled, it must be loaded into
memory to be executed. The system may provide absolute loaders, relocatable loaders, linkage editors, and overlay loaders. Debugging systems for either higher-level languages or machine language are needed as well.
Communications: These programs provide the mechanism for creating virtual connections among
processes, users, and computer systems. They allow users to send messages to one another's screens,
to browse web pages, to send electronic-mail messages, to log in remotely, or to transfer files from
one machine to another. In addition to systems programs, most operating systems are supplied with
programs that are useful in solving common problems or performing common operations. Such
programs include web browsers, word processors and text formatters, spreadsheets, database
systems, compilers, plotting and statistical-analysis packages, and games. These programs are
known as system utilities or application programs. The view of the operating system seen by most
users is defined by the application and system programs, rather than by the actual system calls. When a computer is running the Mac OS X operating system, a user might see the GUI, featuring a mouse and windows interface. Alternatively, or even in one of the windows, the user might have a command-line UNIX shell. Both use the same set of system calls, but the system calls look different and act in different ways.
4. SUMMARY
Operating systems provide a number of services. At the lowest level, system calls allow a running
program to make requests from the operating system directly. System programs are provided to
satisfy many common user requests. The types of requests vary according to level. The system-call
level must provide the basic functions, such as process control and file and device manipulation.
Higher-level requests, satisfied by the command interpreter or system programs, are translated into a
sequence of system calls. System services can be classified into several categories: program control,
status requests, and I/O requests. Program errors can be considered implicit requests for service.
Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B., John Wiley & Sons.
Systems Programming & Operating Systems, 2nd Revised Edition, Dhamdhere D.M., Tata McGraw Hill Publishing Company Ltd., New Delhi.
Operating Systems, Madnick S.E., Donovan J.T., Tata McGraw Hill Publishing Company Ltd., New Delhi.
Operating Systems-A Modern Perspective, Gary Nutt, Pearson Education Asia, 2000.
Operating Systems, Harris J.A., Tata McGraw Hill Publishing Company Ltd., New Delhi, 2002.
The services and functions provided by an operating system can be divided into two main
categories. Briefly describe the two categories and discuss how they differ.
List five services provided by an operating system that are designed to make it more
convenient for users to use the computer system. In what cases would it be impossible for user-level programs to provide these services?
What are the five major activities of an operating system with regard to file management?
What are the advantages and disadvantages of using the same system call interface for manipulating both files and devices?
CS-DE-15
CPU SCHEDULING
LESSON NO 3
STRUCTURE
1. Introduction
2. Objectives
3. Presentation of Contents
3.1 Process
3.5.1 First-Come, First-Served (FCFS) Scheduling
4. Summary
1. INTRODUCTION
The notion of a process rests on the distinction between a program and the activity of executing a program. The former is merely a static set of directions; the latter is a dynamic activity whose properties change as time progresses. At any instant, this activity is characterized by the current status of the activity, called the process state. This state includes the current
position in the program being executed (the value of the program counter) as well as the
values in the other CPU registers and the associated memory cells. Roughly speaking, the
process state is a snapshot of the machine at that time. At different times during the
execution of a program (at different times in a process) different snapshots (different states) will be observed. Scheduling is the activity by which processes are given access to system resources, chiefly the processor. A scheduler is an OS module that selects
the next job to be admitted into the system and the next process to run. The need for a
scheduling algorithm arises from the requirement for most modern systems to perform
multitasking (execute more than one process at a time) and multiplexing (transmit multiple data streams simultaneously).
2. OBJECTIVES
Objectives of this lesson are to learn about the process and process states. Another objective is to understand how scheduling is performed in operating systems. The scheduler is concerned mainly with throughput, latency and waiting time. After a discussion of the various performance criteria behind the design of a scheduler, various scheduling disciplines are presented.
3. PRESENTATION OF CONTENTS
3.1 PROCESS
The notion of process is central to the understanding of operating systems. There are
quite a few definitions presented in the literature, but no "perfect" definition has yet
appeared.
The term "process" was first used by the designers of the MULTICS in 1960’s. Since
then, the term process is used somewhat interchangeably with 'task' or 'job'. The process
A program in Execution.
An asynchronous activity.
As we can see from above that there is no universally agreed upon definition, but the
definition "Program in Execution" seem to be most frequently used. Now that we agreed
upon the definition of process, the question is “what is the relation between process and
program?” In the following discussion we point out some of the differences between
process and program. A process is not the same as a program; rather, a process is more than the program code. A process is an active entity, as opposed to a program, which is considered a passive entity. In addition to the program code, a process generally includes:
executable code, process-specific data (input and output), a call stack (to keep
track of active subroutines and/or other events), and a heap to hold intermediate computation results generated at run time.
Operating system descriptors of resources that are allocated to the process, such as
file descriptors (Unix terminology) or handles (Windows), and data sources and
sinks.
Security attributes, such as the process owner and the process’s set of permissions
(allowable operations).
Processor state, such as the content of registers, physical memory addressing, etc.
The state is typically stored in computer registers when the process is executing, and in memory otherwise.
The process stack (SP), which typically contains temporary data such as subroutine parameters, return addresses, and local variables.
A process is the unit of work in a system. In the process model, all runnable software on the computer is organized into a number of processes, and each process carries with it everything necessary to resume the process's execution if it is somehow put aside temporarily.
At any given point in time, while the program is executing, this process can be uniquely characterized by a number of elements, including the following:
Identifier: A unique identifier that distinguishes this process from all other processes.
State: The current state of the process (for example, running, ready, or blocked).
Priority: The priority level of the process relative to other processes.
Program counter: The address of the next instruction in the program to be executed.
Memory pointers: Includes pointers to the program code and data associated
with this process, plus any memory blocks shared with other processes.
Context data: These are data that are present in registers in the processor while the process is executing.
I/O status information: Includes outstanding I/O requests, I/O devices (e.g., tape
drives) assigned to this process, a list of files in use by the process, and so on.
Accounting information: May include the amount of processor time and clock time used, time limits, account numbers, and so on.
In memory, a process typically consists of: (i) code for the program; (ii) the program's data; and (iii) its stack and heap.
A process goes through a series of discrete process states. The following typical process
states are possible on computer systems of all kinds. In most of these states, processes are "stored" in main memory.
(a) New State: The process is being created. When a user requests a service from the system, the system first initializes the process; every newly requested operation therefore enters the system in the new state.
(b) Running State: A process is said to be running if it is actually using the CPU at
that particular instant. A process moves into the running state when it is chosen for
execution. The process's instructions are executed by one of the CPUs (or cores) of
the system. There is at most one running process per CPU or core. A process can run in either of two modes, namely, kernel mode or user mode.
Kernel mode
o Processes in kernel mode can access both kernel and user addresses.
o Kernel mode allows the execution of privileged instructions. Various instructions (such as I/O instructions and halt instructions) are privileged and may be executed only in kernel mode.
User mode
o Processes in user mode can access their own instructions and data, but not kernel instructions and data. When the computer system is executing on behalf of a user application, the system is in user mode. However, when a user application requests a service from the operating system (via a system call), the system must transition from user to kernel mode to fulfill the request.
o There is an isolated virtual address space for each process in user mode.
o User mode ensures isolated execution of each process so that it does not interfere with other processes.
(c) Blocked State: A process is said to be blocked if it is waiting for some event to
happen before it can proceed. A process may be blocked for various reasons, such as waiting for an I/O operation to complete or for a needed resource to become available.
(d) Ready State: A process is said to be ready if it can use a CPU as soon as one is available. A
"ready" or "waiting" process has been loaded into main memory and is awaiting
execution on a CPU (to be context switched onto the CPU by the dispatcher, or short-
term scheduler). There may be many "ready" processes at any one point of the system's execution; on a single CPU, only one of them can be executing at any one time, and all other "concurrently executing" processes will be waiting for execution. A ready queue or run queue is used in computer scheduling. Modern computers are capable of running many different programs or processes at the same time. However, the CPU is only capable of handling one process at a time.
Processes that are ready for the CPU are kept in a queue for "ready" processes. Other
processes that are waiting for an event to occur, such as loading information from a
hard drive or waiting on an internet connection, are not in the ready queue. The run
queue may contain priority values for each process, which will be used by the
scheduler to determine which process to run next. To ensure each program has a fair
share of resources, each one is run for some time period (quantum) before it is paused and returned to the ready queue.
(e) Terminated state: The process has finished execution. A process may be terminated,
either from the "running" state by completing its execution or by explicitly being
killed. In either of these cases, the process moves to the "terminated" state.
Two additional states are available for processes in systems that support virtual memory.
In both of these states, processes are "stored" on secondary memory (typically a hard
disk).
(f) Swapped out and waiting: This is also called suspended and waiting. In systems that support virtual memory, a
process may be swapped out, that is, removed from main memory and placed on external
storage by the scheduler. From here the process may be swapped back into the waiting
state.
(g) Swapped out and blocked: This is also called suspended and blocked. Processes that are blocked may also be
swapped out. In this event the process is both swapped out and blocked, and may be
swapped back in again under the same circumstances as a swapped out and waiting
process (although in this case, the process will move to the blocked state, and may still be waiting for the event that blocked it).
There are three distinct types of schedulers: a long-term scheduler (also known as an admission scheduler), a medium-term scheduler, and a short-term scheduler. The scheduler is an operating system module that selects the next job to be admitted into the system and the next process to run. Figure 3.1 shows the
possible traversal paths of jobs and programs through the components and queues,
depicted by rectangles, of a computer system. The primary places of action of the three
types of schedulers are marked with down-arrows. As shown in Figure 3.1, a submitted
batch job joins the batch queue while waiting to be processed by the long-term scheduler.
Whenever the CPU becomes idle, it is the job of the CPU Scheduler (the short-term
scheduler) to select another process from the ready queue to run next. After becoming
suspended, the running process may be removed from memory and swapped out to
secondary storage. Such processes are subsequently admitted to main memory by the
scheduler.
Figure 3.1 Schedulers and queues in a computer system (batch queue, ready queue, suspended and swapped-out processes, CPU)
3.2.1 The long-term scheduler
The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready queue (in the main memory). The long-term scheduler, when
present, works with the batch queue and selects the next batch job to be executed. Batch
is usually reserved for resource-intensive (processor time, memory, special I/O devices),
low-priority programs that may be used as fillers to keep the system resources busy
during periods of low activity of interactive jobs. As pointed out earlier, batch jobs
contain all necessary data and commands for their execution. Batch jobs usually also carry estimates of their resource requirements, which the long-term scheduler may use to provide a balanced mix of jobs, such as processor-bound and I/O-bound, to the short-term scheduler. Thus, this
scheduler dictates what processes are to run on a system, and the degree of concurrency
to be supported at any one time, i.e., whether a high or low number of processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. In effect, the long-term scheduler controls the degree of multiprogramming. For example, when the processor utilization is low, the
scheduler may admit more jobs to increase the number of processes in a ready queue, and
with it the probability of having some useful work awaiting processor allocation.
Conversely, when the utilization factor becomes high as reflected in the response time,
the long-term scheduler may opt to reduce the rate of batch-job admission accordingly. In
modern operating systems, this is used to make sure that real time processes get enough
CPU time to finish their tasks. Without proper real time scheduling, modern GUIs would
seem sluggish. As a result of the relatively infrequent execution and the availability of an
estimate of its workload's characteristics, the long-term scheduler may incorporate rather
complex and computationally intensive algorithms for admitting jobs into the system.
3.2.2 The medium-term scheduler
The medium-term scheduler temporarily removes processes from main memory and places them on
secondary memory (such as a disk drive) or vice versa. This is commonly referred to as
"swapping out" or "swapping in" (also incorrectly as "paging out" or "paging in"). The
medium-term scheduler may decide to swap out a process which has not been active for
some time, or a process which has a low priority, or a process which is page faulting frequently, or a process which is taking up a large amount of memory, in order to free up main memory for other processes, swapping the process back in later when more
memory is available, or when the process has been unblocked and is no longer waiting
for a resource. In practice, the main-memory capacity may impose a limit on the number
of active processes in the system. When a number of those processes become suspended,
the remaining supply of ready processes in systems where all suspended processes remain
resident in memory may become reduced to a level that impairs functioning of the short-
term scheduler by leaving it few or no options for selection. In systems with no support
for virtual memory, moving suspended processes to secondary storage may alleviate this
problem.
In the system depicted in Figure 3.1, a portion of the suspended processes is
assumed to be swapped out. The remaining processes are assumed to remain in memory while suspended. The medium-term scheduler has little to do while a process remains suspended. However, once the suspending condition is removed, the medium-term scheduler attempts to allocate the required amount of main memory, and swap the process in and make it ready. To work properly, the medium-term scheduler must be provided with information about the memory requirements of swapped-out processes. This is not difficult to maintain, because the actual size of the process may be recorded at the time of swapping and stored in the related PCB.
In many systems today (those that support mapping virtual address space to
secondary storage other than the swap file), the medium-term scheduler may actually
perform the role of the long-term scheduler, by treating binaries as "swapped-out processes" upon their execution.
3.2.3 The short-term scheduler
The short-term scheduler (also known as the CPU scheduler) decides which of the
ready, in-memory processes are to be executed (allocated a CPU) after a clock interrupt,
an I/O interrupt, an operating system call or another form of signal. Its main objective is
to maximize system performance in accordance with the chosen set of criteria. Since it is invoked very frequently, the short-term scheduler must be fast; it is run for each process switch to select the next process to be run. Given that any such change
could result in making the running process suspended or in making one or more
suspended processes ready, the short-term scheduler should be run to determine whether
such significant changes have indeed occurred and, if so, to select the next process to be run. Some of the events introduced thus far cause rescheduling by virtue of their ability to change the state of the running process or of one or more ready or suspended processes. In general, whenever one of these events occurs, the operating system invokes the short-term scheduler to determine whether another process should be scheduled for execution. Many operating-system services also include invocation of the short-term scheduler as part of their processing. For example, creating a
process or resuming a suspended one adds another entry to the ready list (queue), and the
scheduler is invoked to determine whether the new entry should also become the running
process. Suspending a running process, changing the priority of the running process, and exiting or aborting a process are also events that may necessitate selection of a new running process. Among other things, this service is useful for invoking the scheduler from user-written event-handling routines.
As indicated in Figure 3.1, interactive programs often enter the ready queue
directly after being submitted to the OS, which then creates the corresponding process.
Unlike batch jobs, the influx of interactive programs is not throttled, and they may conceivably saturate the system. The necessary control is usually provided indirectly by deteriorating response time, which tempts the users to give up and try again later, or at least to reduce the rate at which they submit work.
Figure 3.1 illustrates the roles and the interplay among the various types of
schedulers in an operating system. It depicts the most general case of all three types being
present. For example, a larger operating system might support both batch and interactive programs and employ all three types of schedulers. Smaller or special-purpose operating systems may have only one or two types of schedulers: a long-term scheduler is normally found only in systems with support for batch, and the medium-term scheduler is needed only when swapping is used by the underlying operating system. When more than one type of scheduler exists in an operating system, proper support for communication and interaction between them is very important for correct overall operation.
The short-term scheduler may be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to "force" processes off the CPU.
Commonly used criteria for comparing scheduling disciplines include:
1. Processor utilization
2. Throughput
3. Turnaround time
4. Waiting time
5. Response time
The scheduler should also strive for fairness, predictability, and repeatability, so that similar workloads exhibit similar behavior. In practice, these goals often conflict (e.g., throughput versus latency), and thus a
scheduler will implement a suitable compromise. Preference is given to any one of the
above mentioned concerns depending upon the user's needs and objectives. For simplicity
of presentation, in the description that follows, the terms job and process are used interchangeably.
Processor utilization is the average fraction of time during which the processor is busy. Being busy usually refers to the processor not being idle, and includes both the time spent executing user programs and executing the operating system. With this interpretation, processor utilization may be measured with the aid of a special NULL process that runs when nothing else can run. An alternative is to consider
the user-state operation only and thus exclude the time spent executing the operating
system. In any case, the idea is that, by keeping the processor busy as much as possible,
other component utilization factors will also be high and provide a good return on investment. However, with processor utilization approaching 100%, average waiting times and average queue lengths tend to grow excessively.
Throughput refers to the amount of work completed in a unit of time. One way to
express throughput is by means of the number of user jobs executed in a unit of time. The
higher the number, the more work is apparently being done by the system. In closed
environments, where a more or less fixed collection of processes cycles in the system, throughput is largely determined by how efficiently those processes are kept busy. In open systems with random arrivals and service demands, throughput in the long run is dictated by external factors and is a
function of the request arrival rate and the processor service rate. In such systems,
scheduling basically affects the distribution of waiting times among the users.
Turnaround time, T, is defined as the time that elapses from the moment a
program or a job is submitted until it is completed by a system. It is the time spent in the
system, and it may be expressed as a sum of the job service time (execution time) and
waiting time.
Waiting time, W, is the time that a process or a job spends waiting for resource allocation because of contention with others in a multiprogramming system. In effect, waiting time is the penalty imposed for sharing resources with others. Waiting time may be expressed as
W(x) = T(x) - x
where x is the service time, W(x) is the waiting time of the job requiring x units of service, and T(x) is the job's turnaround time. A fair scheduler grants equal CPU time to each process (or, more generally, appropriate times according to each process's priority). Waiting time is the time for which the process remains in the ready queue. For example, a long job executed without preemption and a short job
executed with several preemptions may experience identical turnaround times. However,
the waiting times of the two jobs would differ and clearly indicate the effects and extent of resource contention experienced by each.
Response time in interactive systems is defined as the amount of time it takes from when
a request was submitted until the first response is produced. This is usually called the
terminal response time. In real-time systems, on the other hand, the response time is
essentially latency. It is defined as the time from the moment an event (internal or
external) is signaled until the first instruction of its respective service routine is executed.
3.4 SCHEDULER DESIGN
Designing a scheduler usually begins with selecting the relevant performance criteria and ranking them in relative order of importance. The next step is to
design a scheduling strategy that maximizes performance for the specified set of criteria
while obeying the design constraints. One should intentionally avoid the word "optimize," because practical scheduling disciplines rarely schedule optimally. They are based on heuristic techniques that yield good or near-
optimal performance but rarely achieve absolutely optimal performance. The primary
reason for this situation lies in the overhead that would be incurred by computing an optimal schedule at run time. In addition to average performance, consideration must be given to controlling the variance and limiting the worst-case
behavior. For example, a user experiencing 10-second response time to simple queries
has little consolation in knowing that the system's average response time is under 2
seconds.
One of the problems in selecting a set of performance criteria is that they often
conflict with each other. For example, increased processor utilization is usually achieved
by increasing the number of active processes, but then response time deteriorates. As is
the case with most engineering problems, the design of a scheduler usually requires
careful balance of all the different requirements and constraints. With the knowledge of
the primary intended use of a given system, operating-system designers tend to maximize
the criteria most important in a given environment. For example, throughput and
component utilization are the primary design objectives in a batch system. Multi-user
systems are dominated by concerns regarding the terminal response time, and real-time
operating systems are designed for the ability to handle bursts of external events
responsively.
3.5 SCHEDULING ALGORITHMS
Scheduling algorithms are used in routers (to handle packet traffic) as well as in operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print spoolers), and elsewhere. In describing each discipline below, we illustrate its working by using the term job or process for a unit of work, as appropriate.
With long-term (batch) scheduling, non-preemption implies that, once scheduled, a selected job runs to completion. With short-term scheduling, non-preemption implies that the running process retains ownership of the processor until it voluntarily surrenders control to the operating system. In other words, the running process is not forced to relinquish ownership of the processor when a higher-priority process becomes ready for execution. However, when the running process becomes suspended as a result of its own action, say, by waiting for an I/O operation to complete, another ready process may be scheduled.
With preemptive scheduling, on the other hand, a running process may be interrupted at any time and moved to the ready state by the operating system, which may invoke the scheduler whenever an event that changes the state of the system is detected. The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness amongst the parties utilizing the resources. Scheduling deals with the problem of
deciding which of the outstanding requests is to be allocated resources. There are many different scheduling disciplines; some of the most common are described below.
3.5.1 First-Come, First-Served (FCFS) Scheduling
FCFS, also known as First-In, First-Out (FIFO), is by far the simplest scheduling discipline: the workload is simply processed in the order of its arrival, with no preemption. The implementation of the FCFS scheduler is quite straightforward, and its execution results in little overhead.
By failing to take into consideration the state of the system and the resource requirements of the individual scheduling entities, FCFS scheduling may result in poor performance. As a consequence of the lack of preemption, component utilization and the system throughput rate may be quite low. Since there is no discrimination on the basis of the
required service, short jobs may suffer considerable turnaround delays and waiting times
when one or more long jobs are in the system. For example, consider a system with two
jobs, J1 and J2, with total execution times of 20 and 2 time units, respectively. If they
arrive shortly one after the other in the order J1-J2, the turnaround times are 20 and 22
time units, respectively (J2 must wait for J1 to complete), thus yielding an average of 21
time units. The corresponding waiting times are 0 and 20 time units, yielding an average of 10
time units. However, when the same two jobs arrive in the opposite order, J2-J1, the
average turnaround time drops to 12, and the average waiting time is only 1 time unit.
This simple example demonstrates how short jobs may be hurt by the long jobs in FCFS
systems, as well as the potential variability in turnaround and waiting times from one run
to another.
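The arithmetic of this example can be checked with a small C sketch that computes FCFS turnaround and waiting times for any arrival order (all jobs assumed to arrive at time 0):

#include <stdio.h>

/* Average turnaround and waiting time under FCFS for jobs that all
   arrive at time 0 and are served in the order given.               */
static void fcfs(const int burst[], int n, const char *label)
{
    int clock = 0;
    double turnaround = 0.0, waiting = 0.0;

    for (int i = 0; i < n; i++) {
        waiting    += clock;              /* time spent before service    */
        clock      += burst[i];           /* job runs to completion       */
        turnaround += clock;              /* completion time = turnaround */
    }
    printf("%s: avg turnaround = %.1f, avg waiting = %.1f\n",
           label, turnaround / n, waiting / n);
}

int main(void)
{
    int order1[] = { 20, 2 };             /* J1 then J2 */
    int order2[] = { 2, 20 };             /* J2 then J1 */

    fcfs(order1, 2, "J1-J2");             /* 21.0 and 10.0 time units */
    fcfs(order2, 2, "J2-J1");             /* 12.0 and  1.0 time units */
    return 0;
}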
FCFS has relatively low throughput and poor event-response time due to the lack of preemption, and it does not recognize the notion and importance of process priorities. Process arrival times (i.e., the times at which processes become ready) alone dictate the order of service.
3.5.2 Shortest Remaining Time Next (SRTN) Scheduling
Shortest remaining time next is very similar to Shortest Job First (SJF). With this
strategy the scheduler arranges processes with the least estimated processing time remaining to be next in the queue. This requires advance knowledge or estimation of the time required for a process to complete. SRTN scheduling may be implemented in either the non-preemptive or the preemptive variety; the non-preemptive version of SRTN is called shortest job first (SJF). In either case, whenever the SRTN scheduler is
invoked, it searches the corresponding queue (batch or ready) to find the job or the
process with the shortest remaining execution time. The difference between the two cases
lies in the conditions that lead to invocation of the scheduler and, consequently, the frequency of its execution. Without preemption, the scheduler is invoked whenever a job is completed or the running process surrenders control to the OS. In the preemptive version, whenever an event makes a new process ready, the scheduler is invoked to compare the remaining processor execution time of the running process with the time needed to complete the next processor burst of the newcomer.
Depending on the outcome, the running process may continue, or it may be preempted and replaced by the shortest-remaining-time process. Preemptive SRTN minimizes the average waiting time of a given workload. If a shorter process arrives during another process's execution, the currently running process is interrupted (a form of preemption), dividing that process into two separate computing blocks. This creates
excess overhead through additional context switching. The scheduler must also place
each incoming process into a specific place in the queue, creating additional overhead.
The SRTN discipline schedules optimally assuming that the exact future execution times of jobs or processes are known at the time of scheduling. Dependence on such knowledge limits the practical effectiveness of SRTN implementations, because future process behavior is unknown in general and difficult to estimate reliably. Estimates are therefore usually based on observed past behavior, perhaps coupled with some other knowledge of the nature of the process and its environment. A common estimator is the exponential average of measured execution intervals:
P(n) = a O(n-1) + (1 - a) P(n-1)
where O(n-1) is the observed length of the (n-1)th execution interval, P(n-1) is the predictor for the same interval, and a is a number between 0 and 1. The parameter a controls the relative weight assigned to the past observations and predictions. For the extreme case of a = 1, the past predictor is ignored, and the new prediction equals the last observation; for a = 0, the last observation is ignored. In general, expansion of this relationship yields
P(n) = sum over i = 0, 1, ..., n-1 of a (1 - a)^i O(n-i-1)
Thus the predictor includes the entire process history, with its more recent history weighted more.
Many operating systems measure and record elapsed execution time of a process
in its PCB. This information is used for scheduling and accounting purposes. Even so, SRTN scheduling imposes the overhead of predictor calculation at run time. Moreover, some additional
feedback mechanism is usually necessary for corrections when the predictor is grossly
incorrect.
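A sketch of the predictor update in C; the weight a, the initial guess, and the burst values are illustrative assumptions, not figures from this lesson:

#include <stdio.h>

#define A 0.5   /* weighting parameter a, 0 <= A <= 1 (an assumed value) */

/* One step of the exponential-average predictor:
   P(n) = A * O(n-1) + (1 - A) * P(n-1)                                   */
static double next_prediction(double observed, double previous)
{
    return A * observed + (1.0 - A) * previous;
}

int main(void)
{
    /* Observed CPU-burst lengths of a process, e.g. in milliseconds. */
    double bursts[] = { 6.0, 4.0, 6.0, 4.0, 13.0, 13.0, 13.0 };
    double p = 10.0;                     /* initial guess P(0) */

    for (int i = 0; i < 7; i++) {
        printf("burst %4.1f  predicted %5.2f\n", bursts[i], p);
        p = next_prediction(bursts[i], p);   /* update after observing */
    }
    return 0;
}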
A further disadvantage of SRTN is that it penalizes long processes in most scenarios. Waiting time and response time increase as the process's computational
requirements increase. Since turnaround time is based on waiting time plus processing
time, longer processes are significantly affected by this. Overall waiting time is smaller than under FIFO, however, since no process has to wait for the termination of the longest process.
Preemptive SRTN also incurs the overhead of frequent process switching and scheduler invocation to examine each and every process transition into the ready state. This work is wasted when the new ready process has a longer remaining execution time than the running process.
SRTN provides good event and interrupt response times by giving preference to
the related service routines since their processor bursts are typically of short duration.
3.5.3 Round Robin (RR) Scheduling
In interactive environments, the primary requirement is to provide reasonably good response time and, in general, to share system resources equitably among all users. Obviously, only preemptive disciplines may be considered in such environments, and one of the most popular is time slicing, also known as round robin (RR). Each process is given a slice of time (i.e., one quantum) before being preempted. As each process becomes ready,
it joins the ready queue. A clock interrupt is generated at periodic intervals. When the
interrupt occurs, the currently running process is preempted, and the oldest process in the
ready queue is selected to run next. The time interval between each interrupt may vary.
RR is one of the most common and most important schedulers. It is not the simplest scheduler, but it is among the simplest preemptive ones.
The processes that are ready to run (i.e., not blocked) are kept in a FIFO queue, called the ready queue.
There is a fixed time quantum (50 msec is a typical value), which is the maximum amount of time a process may run before being preempted.
The currently active process P runs until one of two things happens:
P blocks (e.g., waiting for input). In that case, P is taken off the ready queue; it is now in the blocked state.
P exhausts its time quantum. In this case, P is pre-empted, even though it is still able to run.
In either case, the process at the head of the ready queue is now made the active process.
When a process unblocks (e.g., the input it's waiting for is complete), it is put at the rear of the ready queue.
Suppose the time quantum is 40 msec, process P is executing, and it blocks after
15 msec. When it unblocks, and gets through the ready queue, it gets the standard 40
msec again; it doesn't somehow "save" the 25 msec that it missed last time.
Round robin can be viewed as a preemptive variant of FCFS. The key parameter here is the quantum size q. When a process is put into the
running state a timer is set to q. If the timer goes off and the process is still running, the
Operating System preempts the process. This process is moved to the ready state where it
is placed at the rear of the ready queue. The process at the front of the ready list is
removed from the ready list and run (i.e., moves to state running). When a process is
created, it is placed at the rear of the ready list. As q gets large, RR approaches FCFS.
What value of q should we choose? Actually it is a tradeoff (1) Small q makes
system more responsive, (2) Large q makes system more efficient since less process
switching.
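A minimal round robin simulation in C, assuming all processes are CPU-bound and already in the ready queue; it illustrates how the quantum q bounds the time any process waits for its next turn (the burst values are arbitrary example data):

#include <stdio.h>

#define N 3
#define Q 4                                /* time quantum q, in time units */

int main(void)
{
    int remaining[N] = { 10, 4, 7 };       /* remaining service times         */
    int finish[N]    = { 0, 0, 0 };        /* completion time of each process */
    int clock = 0, left = N;

    while (left > 0) {
        for (int i = 0; i < N; i++) {      /* cycle through the ready queue   */
            if (remaining[i] == 0)
                continue;
            int run = remaining[i] < Q ? remaining[i] : Q;
            clock        += run;           /* process i runs for one quantum  */
            remaining[i] -= run;
            if (remaining[i] == 0) {       /* process i completes             */
                finish[i] = clock;
                left--;
            }
        }
    }
    for (int i = 0; i < N; i++)
        printf("process %d finished at time %d\n", i, finish[i]);
    return 0;
}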
Short processes may be executed within a single time quantum and thus exhibit good response
times. Long processes may require several quanta and thus be forced to cycle through the
ready queue a few times before completion. With RR scheduling, response time of long
processes is directly proportional to their resource requirements. For long processes that
consist of a number of interactive sequences with the user, primarily the response time
between two such sequences may be completed within a single time slice, the user should
experience good response time. RR tends to subject long processes without interactive
sequences to relatively long turnaround and waiting times. Such processes, however, may
best be run in the batch mode, and it might even be desirable to discourage users from running them interactively.
Implementation of round robin scheduling requires support of an interval timer, preferably a dedicated one, as opposed to sharing the system time base. The timer is usually set to interrupt the operating system whenever a time slice expires and thus force
the scheduler to be invoked. The scheduler itself simply stores the context of the running
process, moves it to the end of the ready queue, and dispatches the process at the head of
the ready queue. The scheduler is also invoked to dispatch a new process whenever the
running process surrenders control to the operating system before expiration of its time
quantum, say, by requesting I/O. The interval timer is usually reset at that point, in order
to provide the full time slot to the new running process. The frequent setting and resetting
of a dedicated interval timer makes hardware support desirable in systems that use time
slicing.
Round robin is one of the most widely used disciplines, and it is also one of the best-known scheduling disciplines for achieving good and relatively evenly distributed terminal response time. The performance of round robin scheduling is very sensitive to the choice of the time slice. For this reason, the duration of the time slice is usually made a tunable system parameter.
The relationship between the time slice and performance is markedly nonlinear.
Reduction of the time slice should not be carried too far in anticipation of better response
time. Too short a time slice may result in significant overhead due to the frequent timer
interrupts and process switches. On the other hand, too long a time slice reduces the preemption overhead but degrades response time. In summary, too short a time slice results in excessive overhead, and too long a time slice makes RR degenerate toward FCFS, since most processes then surrender control to the Operating System rather than being preempted by the interval timer. The "optimal" value
of the time slice lies somewhere in between, but it is both system-dependent and
workload-dependent. For example, the best value of time slice for our example may not
turn out to be so good when other processes with different behavior are introduced in the
system, that is, when characteristics of the workload change. This, unfortunately, is
commonly the case with time-sharing systems where different types of programs may be submitted at different times. Overall, round robin scheduling discriminates against long non-interactive jobs and depends on the judicious choice of the time slice for adequate performance. The duration of the time slice is a tunable system parameter that may be adjusted when the system is tuned.
3.5.4 Priority-Based and Event-Driven Scheduling
The OS assigns a fixed priority rank to every process, and the scheduler arranges
the processes in the ready queue in order of their priority. Lower-priority processes get preempted by incoming higher-priority processes. Priorities may be static or dynamic; in either case, the user or the system assigns their initial values at process-creation time.
The level of priority may be determined as an aggregate figure on the basis of an initial
value, characteristic, resource requirements, and run-time behavior of the process. In this
sense, many scheduling disciplines may be regarded as being priority-driven, where the priority of a process corresponds to its position in the ready queue. A drawback of priority scheduling is that low-priority processes may be effectively locked out by the higher-priority ones. In general, completion of a process within finite time of its creation cannot be guaranteed with this scheduling policy. A common implementation of priority scheduling maintains a collection of FIFO queues, one for each priority ranking. Processes in lower-priority
queues are selected only when all of the higher-priority queues are empty.
An event-driven (ED) scheduler, in contrast, assigns dynamically varying priorities to all processes and schedules the highest-priority ready process at any time; the relative priorities also determine the order in which an ED scheduler services coincident external events. However, the preference given to high-priority processes may starve low-priority processes. Since it gives little consideration to fairness, event-driven scheduling is used mostly in real-time systems, where each process must be guaranteed execution before expiration of its deadline. Such workloads typically consist of periodic processes, executed cyclically with a known period, and of aperiodic processes whose arrival times are generally not predictable. A scheduler that has been shown to be optimal for such environments is the earliest-deadline scheduler, which schedules for execution the ready process with the earliest
deadline. Another form of scheduler, called the least laxity scheduler or the least slack
scheduler, has also been shown to be optimal in single-processor systems. This scheduler selects the ready process with the least difference between its deadline and its remaining computation time.
3.5.5 Multiple-Level Queue (MLQ) Scheduling
The scheduling disciplines described so far are more or less suited to particular applications or environments. Which one should one use in a mixed system, with some time-critical events, a multitude of interactive users, and some very long non-interactive jobs? This description easily fits any university computing center, with a variety of devices and terminals (interrupts to be serviced), interactive users (student programs), and simulations (batch jobs). One approach that may best service a mixed environment is to combine several scheduling disciplines, each charged with what it does best. For example, time-critical system processes may be subjected to event-driven scheduling, interactive programs to round robin scheduling, and batch jobs to FCFS or
SRTN.
A possible division of the workload might be into system processes, interactive programs, and batch
jobs. This would result in three ready queues, as depicted in Figure 3.2. A process may be
assigned to a specific queue on the basis of its attributes, which may be user-or system-
supplied. Each queue may then be serviced by the scheduling discipline best suited to the
type of workload that it contains. Given a single server (the processor), some discipline
must also be devised for scheduling between queues. Typical approaches are to use
absolute priority or time slicing with some bias reflecting relative priority of the
processes within specific queues. In the absolute priority case, the processes from the
highest-priority queue (e.g. system processes) are serviced until that queue becomes
empty. The scheduling discipline may be event-driven, although FCFS should not be
ruled out given its low overhead and the similar characteristics of processes in that queue.
When the highest-priority queue becomes empty, the next queue may be serviced using
its own scheduling discipline (e.g., RR for interactive processes). Finally, when both higher-priority queues are empty, the batch queue may be serviced; a lower-priority process may, of course, be preempted by the arrival of a process into one of the upper-level queues. This discipline maintains responsiveness to external events and interrupts at the expense of fairness to the lower-priority workload. An alternative is time slicing between the queues, which allocates a certain percentage of the processor time to each queue, commensurate with its priority.
Figure 3.2 Multilevel queue Scheduling
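The dispatching rule of an absolute-priority multilevel queue (as in Figure 3.2) can be sketched in a few lines of C; the queue contents and helper names are assumptions made only for the illustration:

#include <stdio.h>

#define LEVELS 3   /* 0: system, 1: interactive, 2: batch */

/* Number of ready processes currently in each queue (example data). */
static int queued[LEVELS] = { 0, 2, 5 };

/* Absolute-priority MLQ dispatch: always serve the highest-priority
   (lowest-numbered) non-empty queue; lower queues run only when all
   higher queues are empty.                                           */
static int pick_queue(void)
{
    for (int level = 0; level < LEVELS; level++)
        if (queued[level] > 0)
            return level;
    return -1;                 /* nothing ready anywhere */
}

int main(void)
{
    int q = pick_queue();
    if (q >= 0)
        printf("dispatch next process from queue %d\n", q);
    else
        printf("no ready process\n");
    return 0;
}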
advantages of the "pure" mechanisms discussed earlier. MLQ scheduling may also
more appropriate queue may offset the worst-case behavior of each individual discipline.
Potential advantages of MLQ were recognized early on by the O/S designers who have incorporated it into the so-called foreground/background (F/B) system. An F/B system, in its usual form, uses a two-level queue-scheduling discipline. The workload of the system is divided into two queues: a high-priority (foreground) queue of interactive and time-critical processes, and a background queue of all other processes that do not service external events. The foreground queue is serviced
in the event-driven manner, and it can preempt processes executing in the background.
3.5.6 Multiple-Level Queues with Feedback Scheduling
Rather than having fixed classes of processes allocated to specific queues, the idea is to make
traversal of a process through the system dependent on its run-time behavior. For
example, each process may start at the top-level queue. If the process is completed within
a given time slice, it departs the system after having received the royal treatment.
Processes that need more than one time slice may be reassigned by the operating system
to a lower-priority queue, which gets a lower percentage of the processor time. If the
process is still not finished after having run a few times in that queue, it may be moved
to yet another, lower-level queue. The idea is to give preferential treatment to short
processes and have the resource-consuming ones slowly "sink" into lower-level queues,
to be used as fillers to keep the processor utilization high. This philosophy is supported by program-behavior research findings indicating that the completion rate of a process has a tendency to decrease with attained service. In other words, the more service a process receives, the
less likely it is to complete if given a little more service. Thus the feedback in MLQ
mechanisms tends to rank the processes dynamically according to the observed amount of
attained service, with a preference for those that have received less.
On the other hand, if a process surrenders control to the OS before its time slice
expires, being moved up in the hierarchy of queues may reward it. As before, different
queues may be serviced using different scheduling disciplines. In contrast to the ordinary multiple-level queues, multiple-level queues with feedback are thus more adaptive and responsive to the actual, measured run-time behavior of processes, as opposed to a fixed classification made at admission time. The multiple-level queue with feedback is the most general scheduling discipline and may incorporate any or all of the simple scheduling strategies discussed earlier. Its overhead may also combine the elements of each constituent scheduler, in addition to the overhead of the global queue manipulation.
4. SUMMARY
The scheduler is the part of the operating system concerned with processor allocation. Three different schedulers may coexist and interact in a complex operating system: the long-term, the medium-term, and the short-term scheduler. Event-driven, priority-based, and deadline scheduling are dominant in real-time and other systems with time-critical requirements. Multiple-level queue scheduling, and its adaptive variant with feedback, is the most general scheduling discipline suitable for complex environments that serve a mixture of processes with different behavior.
Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B., John Wiley
& Sons.
Systems Programming & Operating Systems, 2nd Revised Edition, Dhamdhere D.M., Tata McGraw Hill Publishing Company Ltd., New Delhi.
Operating Systems, Madnick S.E., Donovan J.T., Tata McGraw Hill Publishing Company Ltd., New Delhi.
Operating Systems-A Modern Perspective, Gary Nutt, Pearson Education Asia, 2000.
Operating Systems, Harris J.A., Tata McGraw Hill Publishing Company Ltd., New Delhi, 2002.
What is a process?
Discuss various process scheduling policies with their cons and pros.
Which action should the short-term scheduler take when it is invoked but no process is ready to run?
Directorate of Distance Education
Kurukshetra University
Kurukshetra-136119
PGDCA/MSc. (cs)-1/MCA-1
FILE SYSTEMS
STRUCTURE
1. Introduction
2. Objective
3. Presentation of Contents
3.2.3 Index-sequential
3.3.2 Linked Allocation
3.3.4 Performance
4. Summary
1. INTRODUCTION
The file system provides the mechanism for the storage and retrieval of both data and programs of the operating system and its users. The file system consists of two distinct parts:
a collection of files, each storing associated data and a directory structure, which
organizes and provides information about all the files in the system. Some file systems
have a third part, partitions, which are used to separate physically or logically large
collections of directories.
2. OBJECTIVES
In this lesson, we discuss various file concepts and the variety of data structures. File
Access Methods such as Sequential Access, Direct Access, and Index-sequential access, along with File Space Allocation methods such as Contiguous Space Allocation, Linked Allocation, and Indexed Allocation, will also be discussed. We also elaborate the Logical Structure of Directories, covering
Acyclic-Graph Directories & General Graph Directory and various Directory Operations.
We also discuss the ways to handle file protection, which is necessary when multiple
users have access to files and where it is usually desirable to control by whom and in what ways files may be accessed.
3. PRESENTATION OF CONTENTS
The file system provides the means for controlling online storage and access to both data
and programs. The file system resides permanently on secondary storage, because the file system must be able to hold a large amount of data permanently. This lesson is primarily
concerned with issues concerning file storage and access on the most common secondary
storage medium, the disk. We look at the ways to allocate disk space, to recover freed
space, to track the locations of data, and to interface other parts of the operating system to
secondary storage.
Each file is a distinct entity and therefore a naming convention is required to distinguish one from another. Operating systems generally employ a naming system for this purpose. In fact, there is a naming convention to identify each resource in the computer system. On the basis of their contents and purpose, files are commonly classified into three categories:
Ordinary files.
Directory files.
Special files.
Ordinary Files
Ordinary files may contain executable programs, text files, binary files or databases. You can create, modify and delete such files as required.
Directory Files
Directory files are the ones that contain information that the system needs to access all types of files, such as the list of file names and other information related to these files, but directory files do not contain the actual file data. Some commands are provided specifically for creating, listing, and removing directories.
Special Files
Special files are also known as device files. These files represent physical devices such as
terminals, disks, printers and tape-drives etc. These files are read from or written into
similar to ordinary files, but the operation on these files activates some physical devices.
These files can be of two types (i) character device files and (ii) block device file. In
character device files, data are handled character by character while in block device files,
data are handled in large chunks of blocks, as in the case of disks and tapes.
The common operations that may be performed on files include:
Read operation
Write operation
Execute
Copying file
Renaming file
Moving file
Deleting file
Creating file
Merging files
Sorting file
Appending file
Comparing file
A link is a special file that contains a reference to another file or subdirectory in the form
of an absolute or relative path name. When a reference to a file is made, we search the
directory. The directory entry is marked as a link and the name of the real file (or
directory) is given. We determine the link by using the path name to locate the real file.
Links are easily identified by their format in the directory entry (or by their having a
special type on systems that support types), and are also called indirect pointers.
A symbolic link can be deleted without deleting the actual file it links to. There can be any number of links to a single file. Symbolic links are useful in sharing a single file called by different names. Each time a
link is created, the reference count in its inode is incremented by one, whereas deletion of a link decreases the reference count by one. The operating system cannot delete files whose reference count is not 0, because a non-zero reference count indicates that the file is in
use.
In a system where symbolic links are used, the deletion of a link does not need to affect
the original file; only the link is removed. If the file entry itself is deleted, the space for
the file is de-allocated, leaving the links dangling. We can search for these links and
remove them also, but unless a list of the associated link is kept with each file, this search
can be expensive. Alternatively, we can leave the links until an attempt is made to use
them. At that time, we can determine that the file of the name given by the link does not
exist, and can fail to resolve the link name; the access is treated just like any other illegal
file name. In the case of UNIX/LINUX, symbolic links are left when a file is deleted, and
it is up to the user to realize that the original file is gone or has been replaced.
Another approach to deletion is to preserve the file until all references to it are deleted.
To implement this approach, we must have some mechanism for determining that the last
reference to the file has been deleted. We could keep a list of all references to a file
(directory entries or symbolic links). When a link or a copy of the directory entry is
established, a new entry is added to the file-reference list. When a link or directory entry
is deleted, we remove its entry on the list. The file is deleted when its file-reference list is
empty.
The trouble with this approach is the variable and potentially large size of the file-reference list. However, we need to keep only a count of the number of references. A new
link or directory entry increments the reference count; deleting a link or entry decrements
the count. Once the count is 0, the file will be deleted; there are no remaining references
to it. The UNIX/LINUX operating system uses this approach for non-symbolic links, or hard links, keeping a reference count within the file information block (or inode). By effectively prohibiting multiple references to directories, we tend to maintain an acyclic-graph structure.
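This reference-count behavior can be observed on UNIX/LINUX with the link() and unlink() calls and the st_nlink field reported by stat(); the file names below are arbitrary, and the sketch assumes "data.txt" already exists:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void show_links(const char *path)
{
    struct stat sb;
    if (stat(path, &sb) == 0)
        printf("%s: %ld hard link(s)\n", path, (long)sb.st_nlink);
}

int main(void)
{
    show_links("data.txt");            /* typically 1                    */

    link("data.txt", "alias.txt");     /* new directory entry, count + 1 */
    show_links("data.txt");            /* now 2                          */

    unlink("alias.txt");               /* remove one entry, count - 1    */
    show_links("data.txt");            /* back to 1; file still exists   */
    return 0;
}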
To avoid these issues, some systems do not permit shared directories or links; MS-DOS, for example, uses a strictly tree-structured directory with no links.
In a multi-user environment a file is needed to be shared among more than one user.
There are many techniques and approaches to effect this operation. A simple approach is to copy the file to the user's local hard disk, but this primarily creates different copies of the file rather than sharing one. A file may instead be shared in one of the following modes:
Read only: In this mode, the user can only read or copy the file.
Linked shared: In this mode, all the users sharing the file can make changes in this
file. However, the changes are reflected in the order determined by the operating
systems.
Exclusive mode: In this mode, a single user can make changes to the file (while others may only read it).
Another approach is to share a file through symbolic links. This approach poses a few issues, namely the concurrent-update problem and the deletion problem. If two users try to update the same file, only the update of one of them is reflected at a time. Besides, if the original file is deleted, other users are left with dangling links.
Locking is a mechanism through which operating systems make sure that the user making
changes to the file is the one who has the lock on the file. As long as the lock remains
with this user, no other user can make changes in the file.
Disks provide the large secondary storage on which a file system is maintained. To
enhance I/O efficiency, I/O transfers between memory and disks are performed in units of
blocks. Each block is one or more sectors. Depending on the disk drive, sectors vary from
32 bytes to 4096 bytes; usually, they are 512 bytes. Disks have two vital characteristics that make them a convenient medium for storing multiple files:
They can be rewritten in place; it is possible to read a block from the disk, to modify the block, and to write it back into the same place.
One can access directly any given block of information on the disk. Thus, it is
easy to access any file either sequentially or randomly, and switching from
one file to another requires only moving the read-write heads and waiting for the disk to rotate.
To provide an efficient and convenient access to the disk, the operating system imposes a
file system to allow the data to be stored, located, and retrieved easily. A file system
poses two different design issues. The first problem is defining how the file system
should look to the user. This task involves the definition of a file and its attributes,
operations allowed on a file and the directory structure for organizing the files. Next, algorithms and data structures must be created to map the logical file system onto the physical secondary-storage devices.
Just as a file must be opened before it is used, a file system must be mounted before it can
be available to processes on the system. The mount procedure is simple. The operating
system is given the name of the device and also the location within the file structure at
which to attach the file system (called the mount point). For example, on the
UNIX/LINUX system, a file system containing user’s home directory might be mounted
as /home; then, to access the directory structure within that file system, one could precede
the directory names with /home, as in /home/sanjay. Mounting that file system under
/users would result in the path name /users/sanjay to reach the same directory.
Next, the operating system verifies that the device contains a valid file system. It does so
by asking the device driver to read the device directory and verifying that the directory
has the expected format. Finally, the operating system notes its directory structure that a
file system is mounted at the specified mount point. This scheme enables the operating
system to traverse its directory structure, switching among file systems as appropriate.
Consider the actions of the Macintosh Operating System. Whenever the system
encounters a disk for the first time (hard disks are found at boot time, floppy disks are
seen once they are inserted into the drive), the Macintosh Operating System searches for
a file system on the device. If it finds one, it automatically mounts the file system at the
boot-level, adds a folder icon to the screen labelled with the name of the file system (as
stored in the device directory). The user is then ready to click on the icon and thus to
display the newly mounted file system.
Attributes are properties of a file. The operating system treats a file according to its
attributes. Some commonly used attribute flags are:
H for hidden
A for archive
D for directory
X for executable
W for write
Files store information which, when needed, may be read into the main memory. There
are several different ways, in which the data stored in a file may be accessed for reading
and writing. The operating system is responsible for supporting these file access methods.
A sequential file is the most primitive of all file structures. It has no directory and no
linking pointers. The records are usually organized in lexicographic order on the value of
some key. In other words, a specific attribute is chosen whose value will determine the
order of the records. Sometimes when the attribute value is constant for a large number of
records, a second key is chosen to give an order when the first key fails to distinguish.
The implementation of this file structure requires the use of a sorting routine.
Its advantages:
It is simple to implement.
Its disadvantages:
It is difficult to update - inserting a new record may require moving a large proportion
of the file.
Sometimes a file is considered to be sequentially organised even though it is not
ordered according to any key. For example, the date of acquisition may be taken as the key
value; the newest entries are simply added to the end of the file, and so there is no
difficulty in updating.
Sometimes it is not necessary to process every record in a file, or to
process records in the order in which they are present. Information in a record of
a file may need to be accessed only when some key value in that record is known. In all such cases,
direct access is used. Direct access is based on the disk being a direct-access device that
allows random access to any file block. Since a file is a collection of physical blocks, any
block, and hence the records in that block, can be accessed directly. Master files are a typical example.
Databases are often of this type since they allow query processing that involves
immediate access to large amounts of information. All reservation systems fall into this
category. Not all operating systems support direct access files. Usually files are to be
defined as sequential or direct at the time of creation and accessed accordingly later.
Sequential access of a direct access file is possible but direct access of a sequential file is
not.
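The difference between the two access methods can be sketched in Python as follows; the block size, file name and block number are illustrative assumptions only:

BLOCK_SIZE = 512                              # assumed block size

# Create a small demonstration file of 100 blocks.
with open("data.bin", "wb") as f:
    f.write(bytes(100 * BLOCK_SIZE))

# Sequential access: read block after block from the beginning.
with open("data.bin", "rb") as f:
    while f.read(BLOCK_SIZE):
        pass                                  # each block is processed in order

# Direct access: jump straight to block 40 without touching blocks 0-39.
with open("data.bin", "rb") as f:
    f.seek(40 * BLOCK_SIZE)                   # position the file pointer directly
    block = f.read(BLOCK_SIZE)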
This access method is a slight modification of the direct access method. It is in fact a
combination of both the sequential access as well as direct access. The main concept is to
access a file directly first and then sequentially from that point onwards. This access
method requires an index to be maintained for the file. To access a
record in a file, a direct access of the index is made. The information obtained from this
access is used to access the file. For example, the direct access to a file will give the
block address and within the block the record is accessed sequentially. Sometimes
indexes may be big. So hierarchies of indexes are built in which one direct access of an
index leads to info to access another index directly and so on till the actual file is
accessed sequentially for the particular record. The main advantage of this type of access
is that both direct and sequential access of records becomes possible.
The direct-access nature of disks permits flexibility in the implementation of files. In
almost every case, several files will be stored on the same disk. One main problem in file
management is how to allocate space for files so that disk space is utilized effectively and
files can be accessed quickly. Three major methods of allocating disk space are
contiguous, linked, and indexed. Each method has its advantages and disadvantages.
Accordingly, some systems support all three (e.g. Data General's RDOS). More
commonly, a system will use one particular method for all files.
The contiguous allocation method requires each file to occupy a set of contiguous
address on the disk. Disk addresses define a linear ordering on the disk. Notice that, with
this ordering, accessing block b+1 after block b normally requires no head movement.
When head movement is needed (from the last sector of one cylinder to the first sector of
the next cylinder), it is only one track. Thus, the number of disk seeks required for
accessing contiguously allocated files is minimal, as is the seek time when a seek is finally
needed. Contiguous allocation of a file is defined by the disk address of the first block and the
length (in blocks) of the file. If the file is n blocks long, and starts at location b, then it occupies blocks
b, b+1, b+2, …, b+n-1. The directory entry for each file indicates the address of the
starting block and the length of the area allocated for this file.
The difficulty with contiguous allocation is finding space for a new file. If the file to be
created is n blocks long, then the OS must search for n free contiguous blocks. First-fit,
best-fit, and worst-fit strategies are the most common strategies used to select a free hole
from the set of available holes. Simulations have shown that both first-fit and best-fit are
better than worst-fit in terms of both time & storage utilization. Neither first-fit nor best-
fit is clearly best in terms of storage utilization, but first-fit is generally faster.
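A minimal sketch of the first-fit strategy over a list of free holes is given below; the hole list and the request size are made-up values, and a real file system would also have to update its directory and free-space structures:

def first_fit(holes, n):
    # holes is a list of (start_block, length) pairs describing free extents.
    # Return (start, n) for the first hole with at least n free blocks, or None.
    for i, (start, length) in enumerate(holes):
        if length >= n:
            if length == n:
                del holes[i]                         # the hole is used up completely
            else:
                holes[i] = (start + n, length - n)   # shrink the hole
            return (start, n)
    return None                                      # no hole large enough: allocation fails

free_holes = [(2, 3), (10, 6), (20, 12)]             # hypothetical free extents
print(first_fit(free_holes, 5))                      # (10, 5): first hole that fits
print(free_holes)                                    # [(2, 3), (15, 1), (20, 12)]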
These algorithms also suffer from external fragmentation. As files are allocated and
deleted, the free disk space is broken into little pieces. External fragmentation exists
when enough total disk space exists to satisfy a request, but this space is not contiguous;
storage is fragmented into a large number of small holes.
Another problem with contiguous allocation is determining how much disk space is
needed for a file. When the file is created, the total amount of space it will need must be
known and allocated. How does the creator (program or person) know the size of the file
to be created? In some cases, this determination may be fairly simple (e.g. copying an
existing file), but in general the size of an output file may be difficult to estimate.
The problems in contiguous allocation can be traced directly to the requirement that the
space be allocated contiguously and that the files needing this space are of varying, often
unpredictable, sizes.
In linked allocation, each file is a linked list of disk blocks. The directory contains a
pointer to the first and (optionally the last) block of the file. For example, a file of 5
blocks which starts at block 4, might continue at block 7, then block 16, block 10, and
finally block 27. Each block contains a pointer to the next block and the last block
contains a NIL pointer. The value -1 may be used for NIL to differentiate it from block 0.
With linked allocation, each directory entry has a pointer to the first disk block of the
file. This pointer is initialized to nil (the end-of-list pointer value) to signify an empty
file. A write to a file removes the first free block and writes to that block. This new block
is then linked to the end of the file. To read a file, the pointers are just followed from
block to block.
There is no external fragmentation with linked allocation. Any free block can be used to
satisfy a request. Notice also that there is no need to declare the size of a file when that
file is created. A file can continue to grow as long as there are free blocks.
Linked allocation does have disadvantages, however. The major problem is that it is
inefficient for direct access; it is effective only for sequential-access files. To find
the ith block of a file, we must start at the beginning of that file and follow the pointers
until the ith block is reached. Note that each access to a pointer requires a disk read.
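The cost of direct access under linked allocation can be illustrated with the small sketch below; the next_block table stands in for the pointers that would really be stored inside the disk blocks, and the block numbers are the ones used in the example above:

# The example file: blocks 4 -> 7 -> 16 -> 10 -> 27, with -1 marking the end.
next_block = {4: 7, 7: 16, 16: 10, 10: 27, 27: -1}

def find_ith_block(start, i):
    # Follow the chain from the first block; every hop would be one disk read.
    block, reads = start, 0
    for _ in range(i):
        block = next_block[block]
        reads += 1
    return block, reads

print(find_ith_block(4, 3))   # (10, 3): three disk reads just to reach logical block 3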
Another severe problem is reliability. A bug in the OS or a disk hardware failure might result
in pointers being lost or damaged, with the effect that a wrong pointer is picked up and the
chain of blocks is followed incorrectly.
The indexed allocation method addresses the problems of both contiguous and
linked allocation. This is done by bringing all the pointers together into one location
called the index block. Of course, the index block itself occupies some space and thus counts
as overhead of the method.
In indexed allocation, each file has its own index block, which is an array of disk sector
addresses. The ith entry in the index block points to the ith sector of the file. The
directory contains the address of the index block of a file (Figure 7.1). To read the ith
sector of the file, the pointer in the ith index block entry is read to find the desired sector.
Indexed allocation supports direct access, without suffering from external fragmentation.
Any free block anywhere on the disk may satisfy a request for more space.
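The index-block idea can be sketched as follows; the sector addresses are invented for illustration:

# Hypothetical index block: entry i holds the disk address of logical block i.
index_block = [9, 16, 1, 10, 25]

def sector_of(i):
    return index_block[i]      # one lookup, regardless of i

print(sector_of(3))            # 10: direct access without following a chain of pointers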
3.3.4 Performance
The allocation methods that we have mentioned vary in their storage efficiency and data-
block access times. Both are important criteria in selecting the proper method or methods
for an operating system to implement.
One difficulty in comparing the performance of the various methods is determining how
the systems will be used. A system used mainly for sequential access should use a
method different from that used for a system with mainly random access. For any type of
access, contiguous allocation requires only one access to get a disk block. Since we can
easily keep the initial address of the file in memory, we can calculate immediately the
disk address of the ith block (or the next block) and read it directly.
For linked allocation, we can also keep the address of the next block in memory and read
it directly. This method is fine for sequential access; for direct access, however, an access
to the ith block might require i disk reads. This problem illustrates why linked allocation
should not be used for applications requiring direct access.
As a result, some systems support direct-access files by using contiguous allocation and
sequential-access files by using linked allocation. For these systems, the type of access to be made
must be declared when the file is created. A file created for sequential access will be
linked and cannot be used for direct access. A file created for direct access will be
contiguous and can be used for both direct access and sequential access, but its maximum
length must be declared when it is created.
In a typical file storage system, numerous files have to be stored on disks of gigabyte
capacity. In order to handle such a situation, the files have to be organized. The organization is
usually done in two parts. In the first part, the file system is broken into partitions, each of
which is a low-level structure in which files and directories reside. Sometimes, there may
be more than one partition on a disk, each partition acting as a virtual disk. The users do
not have to concern themselves with translating physical addresses; the system does
this for them.
Each partition contains information about itself in a file called the partition table. It also contains
information about the files and directories on it. Typical file information is name, size, type,
location etc. The entries are kept in a device directory or volume table of contents.
The file systems of computers can be extensive. Some systems store thousands of files on
hundreds of gigabytes of disk. To manage all these data, we should organize them. This
organization is generally done in two parts; first, the file system is broken into partitions,
also known as minidisks in the IBM
world or volumes in the PC and Macintosh arenas. Sometimes, partitions are used to
provide several separate areas within one disk, each treated as a separate storage device,
whereas other systems allow partitions to be larger than a disk to group disks into one
logical structure. In this way, the user needs to be concerned with only the logical
directory and file structure, and can ignore completely the problems of physically
allocating space for files. For this reason partitions can be thought of as virtual disks.
Second, every partition contains information about files within it. This information is
kept in a device directory or volume table of contents. The device directory records
information such as name, location, size, and type for all files on that partition.
In a single-level directory system, all the files are placed in one directory. This is very
easy to support and understand.
A single-level directory has significant limitations, however, when the number of files
increases or when there is more than one user. Since all files are in the same directory,
they must have unique names. If two users call their data file "test", then
the unique-name rule is violated. Although file names are generally selected to reflect the
content of the file, even a single user with many files may find it difficult to
remember the names of all the files in order to create only files with unique names.
Figure 7.3 Single-level Directory
In the two-level directory system, the system maintains a master block that has one entry
for each user. This master block contains the addresses of the directory of the users.
There are still problems with two-level directory structure. This structure effectively
isolates one user from another. This is an advantage when the users are completely
independent, but a disadvantage when the users want to cooperate on some task and
access files of other users. Some systems simply do not allow local files to be accessed
by other users.
In the tree-structured directory, the directories themselves are files. This raises the question of
how to handle the deletion of a directory. If a directory is empty, its entry in its containing directory can
simply be deleted. However, suppose the directory to be deleted is not empty, but
contains several files, or possibly sub-directories. Some systems will not delete a
directory unless it is empty. Thus, to delete a directory, someone must first delete all the
files in that directory. If there are any sub-directories, this procedure must be applied
recursively to them, so that they can be deleted also. This approach may result in a
substantial amount of work. An alternative approach is that, when a
request is made to delete a directory, all of that directory's files and sub-directories are
also deleted.
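The recursive deletion just described can be sketched in Python using only standard library calls; a real implementation would add error handling and permission checks:

import os

def remove_tree(path):
    # Delete the files in the directory, recurse into sub-directories,
    # and finally remove the now-empty directory itself.
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isdir(full) and not os.path.islink(full):
            remove_tree(full)          # sub-directory: apply the procedure recursively
        else:
            os.remove(full)            # ordinary file (or symbolic link)
    os.rmdir(path)                     # the directory is empty now, so it can be removed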
The acyclic directory structure is an extension of the tree-structured directory structure.
In the tree-structured directory, files and directories starting from some fixed directory
are owned by one particular user. In the acyclic structure, this prohibition is removed, and
files and subdirectories can be shared among users.
One major problem with using an acyclic-graph structure is ensuring that there are no cycles.
If we start with a two-level directory and allow users to create subdirectories, a tree-
structured directory is created. It should be very easy to see that simply adding new files
and subdirectories to existing tree structure preserves the tree-structured nature. However,
when we add links to an existing tree-structured directory, the tree structure is destroyed,
resulting in a simple graph structure.
The main advantage of an acyclic graph is the relative simplicity of the algorithms to
traverse the graph and to determine when there are no more references to a file. We
want to avoid traversing shared sections of an acyclic graph twice, mainly for
performance reasons. If we have just searched a shared subdirectory for a
particular file, without finding that file, we want to avoid searching that subdirectory
again.
If cycles are allowed to exist in the directory, we likewise want to avoid searching any
component twice. A poorly designed search
algorithm might otherwise end up in an infinite loop, repeatedly searching through the cycle
and never terminating. One solution is to arbitrarily limit the number of directories
that will be accessed during a search.
A similar problem exists when we are trying to find out when a file can be deleted. As
with acyclic-graph directory structures, a value zero in the reference count means that
there are no more references to the file or directory, and the file can be deleted. However,
it is also possible that, when cycles exist, the reference count may be nonzero even
when it is no longer possible to refer to a directory or file. This anomaly results from the
possibility of self-referencing (a cycle) in the directory structure. In this case, it is usually
necessary to use a garbage collection scheme to determine when the last reference has
been deleted.
The directory can be viewed as a symbol table that translates file names into their
directory entries. If we take such a view, then it becomes obvious that the directory itself
can be organized in many ways. We want to be able to insert entries, to delete entries, to
search for a named entry, and to list all the entries in the directory.
Now, we examine several schemes for defining the logical structure of the directory
system. When considering a particular directory structure, we have to keep in mind the
operations that are to be performed on it; for example, we need to search the directory to
find the entry for a particular file. Typical operations include:
Create a file: New files need to be created and added to the directory.
Delete a file: When a file is no longer needed, we want to remove it from
the directory.
List a directory: We need to be able to list the files in a directory and the
contents of the directory entry for each file in the list.
Rename a file: Because the name of a file represents its contents to its
users, the name must be changeable when the contents or use of the file
changes. Renaming a file may also allow its position within the directory
structure to be changed.
Traverse the file system: It is useful to be able to access every directory and
every file within a directory structure. For reliability, it is a good idea to save
the contents and structure of the entire file system at regular intervals. This
saving often consists of copying all files to magnetic tape. The technique also provides a
backup copy of a file that is no longer in use; in this case, the file can be copied to tape,
and the disk space of that file released for reuse.
Duplicate copies of files generally provide reliability. Many computer systems have
programs that automatically (or through computer-operator intervention) copy disk files
to tape at regular intervals (once per day or week or month) to maintain a copy should a
file system be accidentally destroyed. File systems can be damaged by hardware problems
(such as errors in reading or
writing), power surges or failures, head crashes, dirt, temperature extremes, and
vandalism. Files may be deleted accidentally. Bugs in the file-system software can also
cause file contents to be lost.
Protection can be provided in different ways. For a small single-user system, we might
provide protection by physically removing the floppy disks and locking them in a desk
drawer or file cabinet. In a multi-user system, however, other mechanisms are needed.
The necessity for protecting files is a direct result of the ability to access files. On
systems that do not allow access to the files of other users, protection is not required.
Thus, one extreme would be to provide complete protection by prohibiting access. The
other extreme is to provide free access with no protection. Both of these approaches are
too extreme for general use. What is needed is the controlled access.
Protection mechanisms give controlled access by limiting the kinds of file access that can
be made. Access is granted or denied depending on many factors, one of which is the
type of access requested. Several different types of operations may be controlled; for example:
Delete - Delete the file and free its space for possible reuse.
Other operations, such as renaming, copying, or editing the file, may also be controlled.
In many systems, however, these higher-level functions (such as copying) may be
built out of lower-level operations, and protection is provided at only the lower level. For
instance, copying a file may be implemented simply
by a sequence of read requests. In this case, a user with read access can also cause the file
to be copied.
Many different protection mechanisms have been proposed. Each scheme has its own
appropriate application. A small computer system that is used by only a few members of a research
group may not need the same types of protection as will a large corporate computer that
is used by many people for many different purposes.
3.5.2 Access Lists and Groups
The most common approach to the protection problem is to make access dependent on
the identity of the user. Various users may need different types of access to a file or
directory. The most general scheme to implement identity-dependent access is to
associate with each file and directory an access list, specifying the user name and the
types of access allowed for each user.
When a user requests access to a specific file, the operating system first checks the access
list associated with that file. If that user is listed for the requested access, the access is
allowed. Otherwise, a protection violation occurs, and the user job is denied access to the
file.
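A minimal sketch of such an access-list check is shown below; the file name, user names and permitted operations are all hypothetical:

# Hypothetical access list: file name -> {user: set of allowed operations}.
acl = {
    "book": {"sara": {"read", "write", "delete"},
             "jim":  {"read", "write"}},
}

def check_access(filename, user, operation):
    # Grant the request only if the user appears on the file's access list
    # with the requested operation; otherwise it is a protection violation.
    return operation in acl.get(filename, {}).get(user, set())

print(check_access("book", "jim", "write"))    # True
print(check_access("book", "jim", "delete"))   # False -> protection violation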
The main problem with access lists is their length. If we want to allow everyone to read a
file, we must list all users with read access. This technique has two undesirable
consequences:
Constructing such a list can be a tedious task, especially if we do not know in
advance the list of users in the system.
The directory entry, previously of fixed size, now needs to be of variable size,
resulting in more complicated space management.
These problems can be resolved by use of a condensed version of the access list. To
condense the length of the access list, many systems recognize three classifications of
users in connection with each file:
Owner - The user who created the file is the owner.
Group - A set of users who are sharing the file and need similar access is a
group or workgroup.
Universe - All other users in the system constitute the universe.
As an example, consider a person, Sara, who is writing a new book. She has hired three
graduate students (Jim, Dawn, and Jill) to help with the project. The text of the book is
kept in a file named book. The protection associated with this file allows:
Sara should be able to invoke all operations on the file.
Jim, Dawn, and Jill should be able only to read and write the file; they should
not be able to delete it.
All other users should be able to read the file. (Sara is interested in letting as
many people as possible read the text so that she can obtain appropriate
feedback.)
To achieve such a protection, we must create a new group, say text, with members Jim,
Dawn, and Jill. The name of the group text must be then associated with the file book,
and the access right must be set in accordance with the policy we have outlined.
Note that, for this scheme to work properly, group membership must be controlled
tightly. This control can be accomplished in a number of different ways. For example, in
the UNIX system, groups can be created and modified by only the manager of the facility
(or by any super-user). Thus, this control is achieved through human interaction. In the
VMS system, with each file, an access list (also known as an access control list) may be
associated, listing those users who can access the file. The owner of the file can create
and modify this list.
With this more limited protection classification, only three fields are needed to define
protection. Each field is often a collection of bits, each of which either allows or prevents
the access associated with it. For example, the UNIX system defines three fields of 3 bits
each: rwx, where r controls read access, w controls write access and x controls execution.
A separate field is kept for the file owner, for the owner's group and for all other users. In
this scheme, 9 bits per file are needed to record protection information. Thus, for our
example, the protection fields for the file book are as follows: for the owner Sara, all 3
bits are set; for the group text, the r and w bits are set; and for the universe, only the r bit
is set. Notice, however, that this scheme is not as general as the access-list scheme.
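The 9 protection bits can be decoded as in the sketch below; the octal mode 0o764 corresponds to the book example (owner rwx, group rw-, universe r--):

def rwx_string(mode):
    # Decode the 9 low-order permission bits into the familiar rwx notation.
    out = []
    for shift in (6, 3, 0):                    # owner, group, universe fields
        bits = (mode >> shift) & 0b111
        out.append(('r' if bits & 4 else '-') +
                   ('w' if bits & 2 else '-') +
                   ('x' if bits & 1 else '-'))
    return ''.join(out)

print(rwx_string(0o764))   # rwxrw-r--  (Sara: rwx, group text: rw-, universe: r--)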
Another approach to the protection problem is to associate a password with each file. Just
as access to the computer system itself is often controlled by a password, access to each
If the passwords are chosen randomly and changed frequently, this method may be
effective in limiting access to a file to only those users who know the password. This scheme,
however, has several disadvantages. First, if we associate a separate
password with each file, then the number of passwords that a user needs to remember
may become very large, making the scheme impractical. If only one password is used for
may become very large, making the scheme impractical. If only one password is used for
all the files, then, once it is exposed, all files are accessible. Some systems (for example,
TOPS-20) allow a user to associate a password with a subdirectory, rather than with an
individual file, to deal with this problem. The IBM VM/CMS operating system allows
three passwords for a minidisk: one each for read, write, and multi write access. Second,
commonly, only one password is associated with each file. Hence, protection is on an all-
or-nothing basis. To provide protection on a more detailed level, we must use multiple
passwords.
Limited file protection is also currently available on single user systems, such as MS-
DOS and Macintosh operating system. These operating systems, when originally
designed, essentially ignored dealing with the protection problem. However, since these
systems are being placed on networks where file sharing and communication is
necessary, protection mechanisms have to be retrofitted into the operating system. Note
that it is almost always easier to design a feature into a new operating system than it is to
add a feature to an existing one. Such updates are usually less effective and are not
seamless.
We should note that, in a multilevel directory structure, we not only need to protect
individual files, but also to protect collections of files contained in a subdirectory; that is,
we need to provide a mechanism for directory protection.
The directory operations that must be protected are somewhat different from the file
operations. We want to control the creation and deletion of files in a directory. In
addition, we probably want to control whether a user can determine the existence of a file
in a directory, so listing the contents of a directory must itself be a protected
operation. Therefore, if a path name refers to a file in a directory, the user must be
allowed access to both the directory and the file. In systems where files may have
numerous path names (such as acyclic or general graphs), a given user may have different
access rights to a file, depending on the path name used.
4. SUMMARY
The file system resides permanently on secondary storage, which is designed to hold a
large amount of data permanently.
The various files can be allocated space on the disk in three ways: through contiguous,
linked or indexed allocation. Contiguous allocation can suffer from external fragmentation,
linked allocation is inefficient for direct access, and indexed allocation
may require substantial overhead for its index block. There are many ways in which these
methods can be tuned and combined.
Free-space allocation methods also influence the efficiency of the use of disk space, the
performance of the file system, and the reliability of secondary storage.
Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B., John Wiley & Sons.
Operating Systems, Madnick S.E., Donovan J.T., Tata McGraw Hill Publishing Company Ltd.
Operating Systems, Harris J.A., Tata McGraw Hill Publishing Company Ltd.
CS-DE-15
LESSON NO. 8
STRUCTURE
1. Introduction
2. Objectives
3. Presentation of Contents
3.4 Formatting
3.5 RAID
4. Summary
1. INTRODUCTION
An operating system needs the capability to deal with a large number of devices. In some
systems, a machine is connected with thousands of different devices, and much of the device
activity is accomplished in a parallel environment. Devices use their own timing and operate
independently of the CPU. The operating system must be able to deal with such concurrent
activity. At the same time it must hide the differing
physical characteristics of CD-ROMs, floppy disks, main memory and printers, so that
applications can read from or write to these devices as if they were all the
same.
2. OBJECTIVES
Device management services are provided not just to application programs. File
management is built upon the abstract I/O system described in this lesson. Using the
abstract I/O interface greatly simplifies
the task of creating a file system. Similarly, swapping software relies on device
management to handle its I/O requirements. The present lesson aims at presenting
the basic concepts of disk organization, disk scheduling, formatting and RAID.
3. PRESENTATION OF CONTENTS
Disks come in many sizes and speeds, and information may be stored optically or
magnetically. However, all disks share a number of important features. A disk is a flat
circular object called a platter. Information may be stored on both sides of a platter
(although some multiplatter disk packs do not use the topmost or bottommost surfaces).
The platter rotates around its own axis. The circular surface of the platter is coated with a
magnetic material, and a read/write head is used to
perform read/write operations on the disk. The read/write head can move radially over the
magnetic surface. For each position of the head, the recorded information forms a circular
track on the disk surface. Within a track, information is written in blocks. The blocks may
be of fixed size or variable size, separated by block gaps. The variable-length block
scheme is flexible but difficult to implement. Blocks can be separately read or written.
The disk can access any information randomly using an address of the record of the form
(track no, record no.). When the disk is in use a drive motor spins it at high speed. The
read/write head positioned just above the recording surface stores the information
magnetically on the surface. On floppy disks and hard disks, the media spins at a
constant rate. Sectors are organized into a number of concentric circles or tracks. As one
moves out from the center of the disk, the tracks get larger. Some disks store the same
number of sectors on each track, with outer tracks being recorded using lower bit
densities. Other disks place more sectors on outer tracks. On such a disk, more
information can be accessed from an outer track than an inner one during a single rotation
of the disk.
Figure 8.1: Moving head disk mechanism
To access a block from the disk, first of all the system has to move the read/write head to
the required position. The time consumed in this operation is known as seek time and the
head movement is called seek. When anything is read or written to a disc drive, the
read/write head of the disc needs to move to the right position. The actual physical
positioning of the read/write head of the disc is called seeking. The amount of time that it
takes the read/write head of the disc to move from one part of the disk to another is called
the seek time. The seek time can differ for a given disc due to the varying distance from
the start point to where the read/write head has been instructed to go. Because of these
variables, seek time is generally quoted as an average seek time. Seek time (S) can be
approximated as
S = H * C + I
where C is the number of tracks (cylinders) to be crossed, H is the time taken to cross one
track, and I is the initial startup (head settling) time.
Rotational latency (sometimes called rotational delay or just latency) is the delay waiting
for the rotation of the disk to bring the required disk sector under the read-write head. It
depends on the rotational speed of a disk, measured in revolutions per minute (RPM).
Once the head is positioned at the right track, the disk is to be rotated to move the desired
block under the read/write head. On average this latency will be one-half of one
revolution. Thus, if R is the rotational speed in revolutions per minute, the average latency
(in seconds) is
L = 30 / R
Finally the actual data is transferred from the disk to main memory. The time consumed
in this operation is known as transfer time. Transfer time T, is determined by the amount
of information to be read, B; the number of bytes per track, N; and the rotational speed R
(in revolutions per minute):
T = 60B / RN
So the total time (A) to service a disk request is the sum of these three, i.e. seek time,
latency and transfer time:
A = S + L + T
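Putting the three formulas together, the sketch below computes the total service time for one request; all the drive parameters (per-track seek time H, startup time I, rotational speed R, bytes per track N) and the request itself are invented purely for illustration:

# Assumed drive parameters (illustrative only).
H = 0.0001    # seconds to cross one track
I = 0.003     # head startup/settling time, seconds
R = 7200      # rotational speed, revolutions per minute
N = 500000    # bytes per track

C = 100       # tracks the head must cross for this request
B = 4096      # bytes to be transferred

S = H * C + I           # seek time
L = 30 / R              # average rotational latency (half a revolution)
T = 60 * B / (R * N)    # transfer time
A = S + L + T           # total time to service the request

print("seek = %.2f ms, latency = %.2f ms, transfer = %.3f ms, total = %.2f ms"
      % (S * 1000, L * 1000, T * 1000, A * 1000))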
Since most systems depend heavily on the disk, it becomes very important to improve
disk performance. Designers
desire to reduce the access time, increase the capacity of the disk and make optimum
use of the disk surface. For example, there may be one head for every track on the disk; this makes it
easy for the computer to switch from one track to another quickly, but it makes the disk very
expensive due to the requirement of a large number of heads. Generally, therefore, there is one
head per surface that moves in and out to access the different tracks.
Higher disk capacities are obtained by mounting many platters on the same spindle to
form a disk pack. There is one read/write head per circular surface of a platter. All heads
of the disk are mounted on a single disk arm, which moves radially to access different
tracks. Since the heads are located on the identically positioned tracks of different
surfaces, such tracks can be accessed without any further seeks. So placing related data in
the same cylinder improves performance.
The hardware for a disk system can be divided into two parts. The disk drive is the
mechanical part, including the device motor, the read/write heads and associated logic.
The other part called the disk controller determines the logical interaction with the
computer. The controller takes instructions from the CPU and orders the disk drive to
carry them out.
Every disk drive has a queue of pending requests to be serviced. Whenever a process
needs I/O to or from the disk, it issues a request to the operating system, which is placed
in the disk queue. The request specifies the disk address, the memory address, the amount of
information to be transferred, and the type of operation (input or output).
For a multiprogramming system with many processes, the disk queue may often be
nonempty. Thus, when a request is complete, the disk scheduler has to pick a new request
from the queue and service it.
When the power is switched off, the primary memory loses the stored information, whereas the secondary
memories retain the stored information. The most common secondary storage is a disk. In
Figure 8.1, we have described the mechanism of a disk and information storage on it. A
disk has several platters. Each platter has several rings or tracks. The rings are divided
into sectors where information is actually stored. The rings with similar position on
different platters are said to form a cylinder. As the disk spins around a spindle, the heads
transfer the information from the sectors along the rings. Note that information can be
read from the cylinder surface without any additional lateral head movement. So it is
always a good idea to organize all sequentially-related information along a cylinder. This
is done by first putting it along a ring and then carrying on with it across to a different
platter on the cylinder. This ensures that the information is stored on a ring above or
below this ring. Information on different cylinders can be read by moving the arm by
relocating the head laterally. This requires an additional arm movement resulting in some
delay, often referred to as seek latency in response. Clearly, this delay is due to the
mechanical structure of disk drives. In other words, there are two kinds of mechanical
delays involved in data transfer from disks. The seek latency, as explained earlier, is due
to the time required to move the arm to position the head along a ring. The other delay,
called rotational latency, refers to the time spent in waiting for a sector in rotation to
come under the read or write head. The seek delay can be considerably reduced by having
a head per track disk. The motivation for disk scheduling comes from the need to keep
both the delays to a minimum. Usually a sector which stores a block of information
additionally holds a lot of other information; for example, a 512-byte block carries nearly
100 additional bytes of housekeeping information.
A user as well as the system spends a lot of time communicating with files
(programs, data, system utilities, etc.) stored on disks. All such communications have the
following components:
4. The starting address in the disk and the current status of the transfer.
The disk I/O is always in terms of blocks of data. So even if one word or byte is required
we must bring in (or write in) a block of information from (to) the disk. Suppose we have
only one process in a system with only one request to access data. In that case, a disk
access request leads finally to the cylinder having that data. However, because processor
and memory are much faster than disks, it is quite possible that there may be another
request made for disk I/O while the present request is being serviced. This request would
queue up at the disk. With multi-programming, there will be many user jobs seeking disk
access. These requests may be very frequent. In addition, the information for different
users may be on completely different cylinders. When we have multiple requests pending
on a disk, accessing the information in a certain order becomes very crucial. Some
policies on ordering the requests may raise the throughput of the disk, and therefore, that
of the system.
As apparent, the amount of head movement needed to satisfy a series of I/O requests
could affect the performance. For this reason, a number of scheduling algorithms have
been proposed.
3.2.1 First Come First Serve (FCFS)
First come first serve is the simplest form of disk scheduling. This algorithm services
requests in the order they are received. Let us illustrate it with a request queue (cylinders
numbered 0-199): 98, 183, 37, 122, 14, 124, 65, 67.
Suppose initially the disk head is at cylinder 53; then it will first move to 98, then to
183, then to 37 and so on, and finally to 67, with a total head movement of 640 cylinders.
3.2.2 Shortest Seek Time First (SSTF)
Shortest Seek Time First services the request with the minimum seek time from the current
head position. Like the Shortest-Job-First CPU scheduling algorithm, this algorithm
may cause starvation of some requests.
Cylinder 65 is the closest request to the initial head position (53).
Once we are at cylinder 65, the next closest request is 67, then 37, and so on; this finally
leads to a total head movement of 236 cylinders, as shown in Fig 8.3. This algorithm gives a
substantial improvement over FCFS, although it is not optimal.
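The head movement of FCFS and SSTF can be computed with the sketch below; the request queue is the one assumed in this example, and the FCFS figure of 640 cylinders quoted above can be reproduced with it:

def fcfs_movement(start, requests):
    # Service requests strictly in arrival order and add up the head movement.
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf_movement(start, requests):
    # Always service the pending request closest to the current head position.
    pending, total, pos = list(requests), 0, start
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # assumed request queue, head at cylinder 53
print(fcfs_movement(53, queue))               # 640 cylinders
print(sstf_movement(53, queue))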
3.2.3 SCAN
In the SCAN algorithm, the read/write head moves back and forth between the innermost and
outermost tracks. As the head reaches a track, it satisfies all the outstanding
requests for that track. The disk arm starts at one end of the disk, moves
toward the other end, and continues servicing requests until it reaches the other end
of the disk; then the head movement is reversed and servicing of requests continues.
This algorithm is also known as the elevator algorithm. A total of 208 cylinders of head
movement are required for our example.
3.2.4 LOOK
In this algorithm, the disk arm starts servicing requests in one direction. The head
services the request for the closest track in that direction. The arm goes only as far as the
final request in each direction and then immediately reverses direction, without going
to the end of the disk. This algorithm is similar to SCAN, but it differs from SCAN in
that the head does not unnecessarily travel to the innermost and outermost tracks.
3.2.5 C-SCAN
This algorithm is the circular version of the SCAN algorithm. It offers a more uniform
wait time than SCAN. The head moves from one end of the disk to the other, servicing
requests as it goes. When it arrives at the other end, however, it
immediately returns to the beginning of the disk, without servicing any requests on the return
trip. This algorithm treats the cylinders as a circular list that wraps around
from the last cylinder to the first one. Let us consider the example shown in
Figure 8.5.
3.2.6 C-LOOK
"Circular" versions of LOOK algorithm that only assure requests while going in one
direction. As the arm head reached to the last, the algorithm return back to the starting
track as shown in Figure 8.6. C-LOOK is better than the LOOK as it minimizes the delay
12
Figure 8.6 C-LOOK Disk Scheduling
3.2.7 N-Step SCAN
First, the request queue is divided into subqueues, each having a
maximum length of N. All the subqueues are processed in FIFO manner. While a subqueue is
being serviced, new requests are placed in the next unfilled subqueue; a request therefore
cannot be postponed indefinitely.
3.2.8 FSCAN
The "F" stands for "freezing" the request queue at a certain time. It is just like N-step
scan but there are two sub queues only and each is of unlimited length. While requests in
one sub queue are serviced, new requests are placed in other sub queue.
It is important to choose a scheduling algorithm that will optimize performance. The commonly used
algorithm is Shortest-Seek-Time-First, and it has a natural appeal. SCAN and its
variants are more appropriate for systems with a heavy load on the disk. It is possible to
define an optimal scheduling algorithm, but the computational overheads required for it
may not justify the marginal savings over SSTF or SCAN.
No doubt in any scheduling algorithm the performance depends on the number and types
of the requests. If every time there is only one outstanding request, then the performance
of all the scheduling algorithms will be more or less equivalent. Studies also suggest that
disk-request patterns, and hence scheduling performance, are heavily influenced
by the file allocation method. The requests generated by contiguously allocated files
will result in minimum movement of the head. But in case of indexed access and direct
access where the blocks of a file are scattered on the disk surface resulting into a better
utilization of the storage space, there may be a lot of movement of the head.
In all these algorithms, the scheduling decision is taken on the basis of
head movement, i.e. seek time. Latency time is not considered as a factor, because the
rotational position of the disk cannot easily be predicted.
However, multiple requests for the same track may be serviced based on latency.
3.4 Formatting
Before data can be written to a disk, the disk must be organized into sectors and the
necessary administrative data must be written to it. This low-level formatting or physical
formatting is often done by the manufacturer. During the formatting process, some sectors may be
found to be defective. Many disks have additional spare sectors, and a remapping mechanism
substitutes a spare sector for each defective one.
For sectors which fail after formatting, the operating system may implement a bad
block mechanism. Such mechanisms are usually in terms of blocks and are implemented
at a level above the device driver. Disk performance can be affected by the manner in
which sectors are located on a track. If disk I/O operations are limited to transferring a
single sector at a time, then to read multiple sectors in sequence, separate I/O operations must
be performed. An interrupt must be processed and the second I/O operation must be issued
after the completion of the first I/O operation. During this time, the disk continues to spin. If
the start of the next sector to be read has already spun past the read/write head, the sector
cannot be read until the next revolution of the disk brings it by the read/write
head. In a worst-case scenario, the disk must wait almost a full revolution. Sectors may
therefore be interleaved: logically consecutive sectors are physically separated so as to account for
how far the disk revolves in the time from the end of one I/O operation until the
controller can issue a subsequent I/O operation. The sector layout for different degrees
of interleaving is shown in Figure 8.7. Interleaving is not needed when the controller
contains enough memory to store an entire track, since then a single I/O operation
can be used to read all the sectors on a track. Interleaving is most commonly used on less
sophisticated devices.
Figure 8.7 Interleaving
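One simple way to generate an interleaved layout is sketched below; the number of sectors per track and the interleave factor are arbitrary example values:

def interleave(n_sectors, factor):
    # Place logical sectors around the track so that consecutive logical
    # sectors are 'factor' physical positions apart.
    track = [None] * n_sectors
    pos = 0
    for logical in range(n_sectors):
        while track[pos] is not None:          # skip slots already assigned
            pos = (pos + 1) % n_sectors
        track[pos] = logical
        pos = (pos + factor) % n_sectors
    return track

print(interleave(8, 1))   # [0, 1, 2, 3, 4, 5, 6, 7]  (no interleaving)
print(interleave(8, 2))   # [0, 4, 1, 5, 2, 6, 3, 7]  (one slot between logical neighbours)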
Many operating systems also provide the capability for disks to be divided into
one or more virtual disks called partitions. On personal computers, DOS, Windows, and
UNIX all stick to a common partitioning scheme so that they all may co-reside on a
single disk.
An empty file system must be created on a disk partition, before writing the files
to a disk. This requires the different data structures for different file systems to be written
to the partition; this is called a high-level format or logical format. Sometimes a file
system is not required in order to make use of a disk. Some operating systems allow applications
to write directly to a disk device. For such an application, a directly accessible disk
device is just a large sequential collection of storage blocks, and it is the application's
responsibility to impose any structure on the data.
3.5 RAID
RAID (Redundant Array of Inexpensive Disks) combines several physical disks into one unit to
improve performance and/or
reliability. RAID can be implemented in hardware or in the operating system. There are
six different types of RAID systems that are described below and illustrated in Fig. 8.8.
RAID level 0 builds one large virtual disk from a number of smaller disks.
Storage is combined into logical units called strips, with the size of a strip being some
multiple (possibly one) of the sector size. The virtual storage is a sequence of
strips interleaved among the disks. The basic benefit of RAID-0 is its capability to
provide a large disk, but its reliability benefits are limited. Files generally get scattered
over a number of disks; hence, even after a disk failure, some file data can be retrieved
safely. Performance benefits can be achieved by accessing sequentially stored data:
the second disk can start reading the second strip while the first disk is in the
process of reading the first strip. If there are N disks in the array, N I/O operations can
be in progress at the same time; this overlapping of transfers is
known as pipelining.
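The mapping from a logical strip number to a physical (disk, strip) location in RAID-0 is a simple round-robin calculation, as the sketch below shows; the array size and strip numbers are arbitrary:

def raid0_location(logical_strip, n_disks):
    # RAID-0: strips are interleaved round-robin across the member disks.
    disk = logical_strip % n_disks              # which disk holds this strip
    strip_on_disk = logical_strip // n_disks    # position of the strip on that disk
    return disk, strip_on_disk

# With 4 disks, consecutive logical strips land on different disks,
# so up to 4 transfers can be in progress at the same time.
for s in range(8):
    print(s, raid0_location(s, 4))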
RAID level 1 stores duplicate copies of each strip, with each copy on a different
disk. The simplest organization consists of two disks, one being an exact duplicate of the
other. Read requests can be optimized if they are handled by the copy that can be accessed
more quickly.
Write requests result in duplicate write operations, one for each copy. Writing
is therefore not as efficient as reading; a write is not complete until the slower of the two
copies has been updated.
Figure 8.8 RAID levels
Single copies of each strip are maintained in RAID levels 2 to 5, along with redundant
information that allows a failed disk to be reconstructed. In RAID-2, an error-
correcting code (such as a Hamming code) is calculated for the corresponding bits on
each data disk. The bits of the code are stored on multiple drives. The strips are very
small, so when a block is read, all disks are accessed in parallel. RAID-3 is similar, but a
single parity bit is used instead of an error-correcting code; RAID-3 therefore requires just one
extra disk. If any disk fails in the array, its data can be regenerated from the data on the
remaining disks.
RAID level 4 is similar to RAID-3, except that the strips are larger. So, an operation to read a
block involves only a single disk. Write operations require the parity information to be
recalculated, and writes must be performed on both the data and the parity disk. As the parity
disk must be written whenever any data disk is written, the parity disk can become a bottleneck.
RAID-5 eliminates this potential bottleneck by distributing the parity strips across all the disks.
A RAM disk is a virtual block device created from main memory. Commands to
read or write disk blocks are implemented by the RAM disk device driver. Unlike real
disks, main memory provides direct access to data. Seek and rotational delay, generally
found on disk devices, do not exist in RAM disks. They are mainly useful for storing small
amounts of frequently accessed temporary data.
The two biggest disadvantages of RAM disks are cost and volatility. To
implement a RAM disk, the operating system must reserve a section of main memory, which
is far more expensive per byte than disk storage. The
other major disadvantage is that when power is lost, the memory contents are lost as well. If the
RAM disk is to store a file system, that file system must be remade each time the system
is booted. Any files stored on a RAM disk file system will be lost when the system is
rebooted. In case of a power failure, any important data stored on a RAM disk will be
lost.
Allocating a large contiguous section of memory for RAM disk use can be easily done
when the system is initialized.
4. SUMMARY
The major secondary-storage I/O device in most computers is the disk drive.
Requests for disk I/O are generated by the virtual memory system and by the file
system. Each request specifies the address to be referenced, in the form of a logical
block number. Disk-scheduling policies may improve the overall bandwidth, the average
response time, and the variation in response time. Policies like FIFO, SSTF, C-SCAN,
SCAN, LOOK, and C-LOOK are designed in order to decrease the total seek time.
RAID is used to enhance the reliability of disks. A RAM disk is a virtual block
device built from main memory. Commands to read or write disk blocks are processed by
the RAM disk device driver.
Operating System Concepts, 5th Edition, Silberschatz A., Galvin P.B., John Wiley & Sons.
Operating Systems, Madnick S.E., Donovan J.T., Tata McGraw Hill Publishing Company Ltd., 2000.
Operating Systems, Harris J.A., Tata McGraw Hill Publishing Company Ltd.
What do you understand by seek time, latency time, and transfer time? Explain.
Shortest Seek Time First favors tracks in the center of the disk. On an operating
system using Shortest Seek Time First, how might this affect the design of the file
system?
What is the difference between Look and C-Look? Discuss using suitable
example.
All the disk scheduling algorithms except First Come First Serve may cause
starvation of some requests. Explain why.
What do you understand by RAID? What are the objectives of it? Explain.
What are the limitations due to redundancy in RAID? What are the advantages of
redundancy? Explain.
Write a detailed note on the different RAID organizations. Discuss their merits and
demerits also.
CS-DE-15
OPERATING SYSTEMS
WINDOWS-I
STRUCTURE
1. Introduction
2. Objectives
3. Presentation of Contents
3.1 Windows
3.1.6 Scrollbars
3.3 Desktop
3.3.1 Icon
3.3.3 Start button
3.3.4 My computer
3.3.5 My documents
4. Summary
1. INTRODUCTION
An operating system is an interface between the hardware and the user. It is responsible for
the management and coordination of activities and the sharing of the resources of the
computer, and it acts as a host for the applications that run on the machine. As a host,
one of the purposes of an operating system is to handle resource allocation and access
to the hardware on behalf of applications.
In this lesson, the basics of the Windows operating system are discussed. Various versions
of the Windows operating system are also described, to give an insight into the improvements
that distinguish the various versions. This lesson is intended to be helpful to users who are new
to the Windows environment.
2. OBJECTIVES
This lesson is designed to help you learn the basic commands and elements of Windows.
This lesson is not geared toward a certain version of Windows for instance Windows 95,
Windows 98, or Windows 2000, but discusses those aspects that are common throughout
all versions. If you are trying to bring your computer knowledge up to date, then
this lesson is a good place to start.
3. PRESENTATION OF CONTENTS
3.1 Windows
Microsoft Windows is a family of operating systems built around graphical user interfaces
(GUIs). Microsoft Windows came to dominate the world's personal computer market,
overtaking Mac OS, which had been introduced previously. As of October 2009, Windows
was by far the most widely used family of client operating systems.
1. Every window has a title bar which displays the name of the window.
3. A window can be closed by pressing the x button at the right of the title bar.
Control Box: The control box provides a menu that enables you to restore, move, size,
minimize, maximize, or close the window.
Border: The border separates the window from the desktop. You resize the window by
dragging its border to expand or contract it.
Title bar: The title bar displays the name of the current file and the name of the
current program.
Minimize button: Use the Minimize button to temporarily decrease the size of a window
or remove it from view; the program remains available on the taskbar.
Maximize button: Click the Maximize button and the window will fill the screen.
Restore button: After you maximize a window, if you click the Restore button, the
window returns to its former size.
Close button: Click the Close button to exit the window and close the program.
Menu bar: The menu bar displays the program menu. You send commands to the program
through the menu.
Toolbars: Toolbars generally display right below the menu, but you can drag
them and display them along any of the window borders. You use the toolbar buttons to
issue commands quickly.
Work area: The work area is located in the center of the window. You perform most of
your work there.
Status bar: The status bar provides you with information about the status of your
program.
3.1.2 Switching between windows
If you have several windows open at the same time, the window on top is the window
with focus. You can only interact with the window with focus. To change windows, do
one of the following:
1. Hold down the Alt key and press the Tab key (Alt-Tab) until you have selected
the window you want.
2. All active files are displayed on the taskbar. Click the taskbar button for the
window you want.
3.1.3 Cascading windows
Cascading windows fan out across your desktop with the title bar of each window
showing.
3.1.4 Tiling windows
Tiling your windows is a way of organizing your windows onscreen. When you tile your
windows, Windows places each window on the desktop in such a way that no window
overlaps any other window. You can tile your windows horizontally or vertically, whichever you
prefer.
3.1.5 Scrollbars
In many programs, if the contents of the work area do not fit in the window, scrollbars
will appear. A vertical scrollbar will appear at the right side of the window and a
horizontal scrollbar at the bottom of the window, depending on the fit. The vertical
scrollbar provides a way to move up and down. The horizontal scrollbar provides a way
The scroll box indicates where you are in your document. If the scroll box is at the top of
the scrollbar, you are at the top of the document. If the scroll box is in the center of the
scrollbar, you are in the middle of the document.
To move from side to side one character at a time, click the arrow at either end of the
horizontal scrollbar.
To scroll continuously:
Click the appropriate arrow and hold down the mouse button.
Left-click the scrollbar and hold down the left mouse button until you arrive at the
location. For example, if you want to go to the center of the document, click the
center of the scrollbar and hold down the left mouse button.
Or drag the scroll box until you arrive at the desired location.
Windows 1.0
Windows 1.0 was the first version of Microsoft Windows, released in November 1985.
Windows 2.0
Unlike Windows 1.0, which was capable only of displaying tiled windows, Windows 2.0
allowed windows to overlap one another and offered several other improvements over Windows
1.0.
Windows 3.0
Windows 3.0 was the third major release of Microsoft Windows,
released on 22nd May 1990. It turned out to be the first broadly used
version of Windows.
Windows 3.1
Windows 3.1 (code-named Janus) was released on March 18, 1992. This
version includes a built-in TrueType font system, making Windows a serious
desktop publishing platform for the first time. Windows 3.0 could achieve similar
functionality with the use of the Adobe Type Manager (ATM) font system from
Adobe.
Windows NT
Windows NT was designed from the ground up as a
powerful, high-level-language-based, processor-independent, multiprocessing,
multiuser operating system.
Windows 95
Windows 95 was released in August 1995, merging Microsoft's formerly separate
MS-DOS and Windows products.
Windows 98
Windows 98 was released in June 1998, and a Second Edition followed in
1999. It includes fixes for many minor issues, improved USB support, and the
replacement of Internet Explorer 4.0 with the relatively faster Internet Explorer 5.0.
Windows ME
Windows ME was the successor to Windows 98 and, just like Windows 98, was
targeted at home users. It included Internet Explorer 5.5,
Windows Media Player 7, and the new Windows Movie Maker software, which
provided basic video editing and was designed to be easy for home users.
Windows 2000
Windows 2000, released in February 2000, was built on the Windows NT kernel and was
aimed mainly at business desktops and servers. The Windows 9x line, in contrast, continued
with Windows ME, which limited access to real-mode MS-DOS so as to speed up the system
boot time; as a result, applications that require real-mode DOS could not be made to run
on Windows ME.
Windows XP
Windows XP is an operating system produced for use on
general-purpose computer systems, which include business and home desktops,
laptops and media centers. It was released in
October 2001.
Windows Vista
After the worldwide success of XP and its service packs, Microsoft designed and
created Windows Vista, an operating system for use on personal computers,
including business and home desktops, Tablet PCs, laptops and media centers. It
was first code-named "Longhorn", but on 22nd July 2005 the name was announced as
Windows Vista. Development was completed in
November 2006. In the next three months, Vista was made available in stages to hardware
and software manufacturers, business customers and other
organizations. It was released globally on 30th January 2007 for the general
public.
Windows 7
Windows 7 is intended for use on personal computers, including home and business desktops,
laptops, netbooks, tablet PCs and media
center PCs. Windows 7 was released to manufacturing on July 22, 2009, and
reached general retail availability on October 22, 2009, less than three years after
the release of its predecessor, Windows Vista.
3.3 Desktop
The desktop is the main screen area that appears after you log on to Windows; it is where
applications, folders and shortcuts are located. The desktop contains the following items:
Icons
Taskbar
Start Button
3.3.1 Icon
An icon is a graphic image. Icons help you execute commands quickly. Commands tell
the computer what you want the computer to do. To execute a command by using an
icon, double-click (or, in some configurations, single-click) the icon.
The Windows operating system uses different icons to represent files, folders and
applications. Icons found on the desktop are normally left aligned. The icons provided by
windows are:
1. My Documents
2. My Computer
3. My Network Places
4. Recycle Bin
5. Internet Explorer
3.3.2 Taskbar
The task bar is at the bottom of the desktop, but you can move it to the top or either side
of the screen by clicking and dragging it to the new location. Buttons representing
programs currently running on your computer appear on the task bar. At the very left of
the task bar is the start button. At the right side is an area called the system tray. Here you
will find graphical representation of various background operations. It also shows the
system clock.
3.3.3 Start button
The Start button is found at the lower left corner of the screen. Click once on the start button
to open a menu of choices. Through this button, we can open the programs installed on
your computer and access all the utilities available in the windows environment.
We can shutdown, restart and/or standby the computer by using the start button.
Screenshot of Desktop
3.3.4 My Computer
My Computer lets you browse the contents of your computer. The common tasks that we
can perform through it include:
1. Create, move, copy, delete or rename files, folders and programs from one
location to another.
2. Add or remove a printer.
3.3.5 My Documents
My Documents is a desktop folder that provides a convenient place to store documents and
other files that you want to access quickly. On the desktop, it is represented by a folder
with a sheet of paper in it. When you save a file in a program such as WordPad or Paint,
the file is by default saved in My Documents unless you choose a different location.
The following steps may be followed to open a document from its window.
Recycle bin makes it easy to delete and undelete files and folders. When a file or folder is
deleted from any location, Windows stores it in the recycle bin. If a file is deleted
accidentally, you can move it back from the recycle bin. We can also empty recycle bin
Steps to move back the file or folder from the recycle bin:
1. Open the Recycle Bin and select the file or folder you want to move back.
2. Choose the Restore option; Windows will move the file or folder back to the location from
where it was deleted.
The Start menu contains the following options:
Programs
Favorites
Documents
Settings
Find
Help
Run
Shutdown
Programs
Place the mouse pointer on the Programs entry and a submenu will open, showing all programs or applications currently installed. To open a program which has been installed, click its name in the submenu.
Favorites
The Favorites menu presents a list of the Internet addresses that you have added to your Internet Explorer Favorites list.
Documents
The Documents menu lists the files you have recently worked on. You can open the most recently used document directly from here. To open a document from this list, simply click its name.
Settings
This menu provides the facility to change or configure the hardware or software settings of your computer. The individual icons in the Control Panel refer to a variety of tools that control the way your computer and its components present information, as well as tools that control the behavior of the devices and programs installed on the system.
The Find/Search
This option of the start menu helps in locating files or folders stored on the hard disk or
network.
This command is very helpful in case we forget the exact location of a file or folder that
we want to access. The search option presents different ways for finding a file or folder.
These options include search based on name, type, size, and date and storage location of
the file or folder. It opens a dialog box, where the user can type a name of the file or
folder to search for. The procedure of using this command is given below:
1. Click on Find option of the start menu, the Find dialog box will appear.
2. Enter the name of the file or folder in the Named text box.
3. From the Look in drop-down list box, choose the location where you think your file or folder is stored.
4. Click the Find Now button to start the search.
5. If the Find dialog box successfully locates the desired file or folder, it will display it in the results list.
Help
To access the Help system of Windows, you can select Help from the Start menu. The Help option explains how to use the commands and menus and, in case of problems, how to solve them.
Run
This command is used to execute a command or program directly instead of using its icon or the Programs menu. Press the "Browse" button to locate the program you want to open, and then click OK to run it.
Shut Down
Shutdown is a process in which the computer closes all programs currently running, disconnects the devices connected with it and turns itself off. The following steps are followed to shut down the computer:
1. Click the Start button. The Start menu will appear.
2. Click Shut Down. The Shut Down Windows dialog box will appear.
3. Open the drop-down list of available options.
4. Choose the Shut Down option from the list and click the "OK" button.
Shortcuts
A shortcut is an icon that points to a program, folder, document, or Internet location. Clicking on a shortcut icon takes you directly to the object to which the shortcut points. Shortcut icons contain a small arrow in their lower left corner. Shortcuts are merely pointers; deleting a shortcut will not delete the item to which the shortcut points.
To create a shortcut to an item on the Start menu:
1. Click Start. The Start menu will appear.
2. Locate the item to which you want to create a shortcut. If the item is located on a submenu, point to the submenu to open it.
3. Hold down the right mouse button, drag the item onto the desktop, and choose Create Shortcut(s) Here.
To create a shortcut to an item stored on your computer:
1. Locate in Windows Explorer the item to which you want to create a shortcut.
2. Hold down the right mouse button and drag the item onto the desktop.
To change the icon used by a shortcut:
1. Right-click the shortcut. A context menu will appear.
2. Click Properties.
3. Click the Change Icon button, select a new icon and click OK.
Please note that icons can be changed. If you do not see the Change Icon button, the icon cannot be changed.
3.4 Shortcut Keys
You can use shortcut keys to execute a command quickly by pressing key combinations
instead of selecting the commands directly from the menu or clicking on an icon. When
you look at a menu, you will notice that most of the options have one letter underlined.
You can select a menu option by holding down the Alt key and pressing the underlined
letter. You can also make Alt-key selections from drop-down menus and dialog boxes. A
key name followed by a dash and a letter means to hold down the key while pressing the
letter. For example, "Alt-f" means to hold down the Alt key while pressing "f" (this will
open the File menu in many programs). As another example, holding down the Ctrl key
while pressing "b" (Ctrl-b) will bold selected text in many programs. In some programs,
Windows Explorer
Windows Explorer is a place where you can view the drives on your computer and manipulate the folders and files. Using Windows Explorer, you can cut, copy, paste, rename and delete files and folders. To open Windows Explorer:
1. Click the Start button, located in the lower left corner of your screen.
2. Highlight Programs.
3. Highlight Accessories.
4. Click Windows Explorer.
Alternatively, you can open Windows Explorer by holding down the Windows key and
typing e (Windows-e).
To add an item located in Windows Explorer to the Start menu or to a Program menu, open the Taskbar and Start Menu Properties dialog box (right-click the taskbar and click Properties), click Customize, and then:
1. Click Add.
2. Type the path to the item you want to add, or use Browse to navigate to the item.
3. Click Next, follow the remaining prompts, and click OK.
To remove an item from the Start menu or a Program menu, open the same Customize dialog box, click the Remove button, select the item you want to remove, and click Yes when prompted to confirm.
To copy an item from a menu, right-click it and click Copy; you can then paste the copy wherever you want it to appear.
To delete an item from a menu:
1. Locate the item you want to delete.
2. Right-click it. A context menu will appear.
3. Click Delete. You will be prompted.
4. Click Yes.
To sort a menu:
1. Go to the menu.
2. Right-click it. A context menu will appear.
3. Click Sort by Name.
4. SUMMARY
Through this lesson, you have learnt the features of Windows and the various versions of Windows and their uses. We discussed the basics of the Windows operating system, such as My Computer, the Recycle Bin, the desktop, icons and Windows Explorer. In the next lesson, we will discuss toolbars, simple operations such as copying, deleting and moving files and folders from one drive to another, and Windows settings using the Control Panel.
SELF ASSESSMENT QUESTIONS
1. Explain the Start button of the Windows operating system. What are the various options available through it?
2. Write short notes on the following:
o Recycle Bin
o Taskbar
o My Computer
CS-DE-15
OPERATING SYSTEMS
STRUCTURE
1. Introduction
2. Objectives
3. Presentation of Contents
3.4 Drives
3.5 Accessories
4. Summary
1. INTRODUCTION
In the previous lesson, we have learnt about the various versions of the Windows operating system and their uses, and learnt something about the desktop, icons and Windows Explorer. This lesson is intended to discuss some useful and advanced activities in Windows, such as Windows accessories, the Control Panel, dialog boxes and toolbars. These are very important activities which help the user to work with the system effectively.
2. OBJECTIVES
In this lesson, we will explain some advanced operations of the Windows operating system. The basic objective of this lesson is to make you understand some core operating system operations and their uses, and to familiarize you with dialog boxes and toolbars. Here we will discuss working with files and folders: simple operations like copying, deleting and moving files and folders from one drive to another. Lastly, we will explain Windows accessories and the Control Panel.
3. Presentation of Contents
3.1 Dialog Box
A dialog box is a special window, used in user interfaces to display information to the user, or to
get a response if needed. They are so-called because they form a dialog between the computer
and the user, either informing the user of something, or requesting input from the user, or both. It
provides controls that allow you to specify how to carry out an action.
Dialog boxes consist of a title bar (to identify the command, feature, or program where a dialog
box came from), an optional main instruction (to explain the user's objective with the dialog
box), various controls in the content area (to present options), and commit buttons (to indicate how the user wants to commit to the task).
Modal dialog boxes require users to complete the interaction and close the dialog box before continuing with the owner window. These dialog boxes are best used for critical or infrequent, one-off tasks that require completion before continuing.
Modeless dialog boxes allow users to switch between the dialog box and the owner
window as desired. These dialog boxes are best used for frequent, repetitive, on-going
tasks.
A task dialog is a dialog box implemented using the task dialog application programming
interface (API). They consist of the following parts, which can be assembled in a variety of
combinations:
A title bar to identify the application or system feature where the dialog box came from.
A main instruction, with an optional icon, to identify the user's objective with the
dialog.
A command area for commit buttons, including a Cancel button, and optional "More options" controls.
A footnote area for optional additional explanations and help, typically targeted at less
experienced users.
3.1.1 Design concepts of dialog box
When properly used, dialog boxes are a great way to give power and flexibility to your program.
When misused, dialog boxes are an easy way to annoy users, interrupt their flow, and make the
program feel indirect and tedious to use. Modal dialog boxes demand user’s attention. Dialog
boxes are often easier to implement than alternative UIs, so they tend to be overused.
A dialog box is most effective when its design characteristics match its usage. A dialog box's
design is largely determined by its purpose (to offer options, ask questions, provide information
or feedback), type (modal or modeless), and user interaction (required, optional response, or
acknowledgement), whereas its usage is largely determined by its context (user initiated or program initiated). Commonly used patterns include the following:
Question dialogs (using buttons) ask users a single question or ask them to confirm a command.
Choice dialogs present users with a set of choices; unlike question dialogs, choice dialogs can ask multiple questions.
Progress dialogs present users with progress feedback during a lengthy operation (longer
than five seconds), along with a command to cancel or stop the operation.
Informational dialogs display information requested by the user.
Dialog boxes can contain several kinds of controls, including the following.
Tabs
Some programs provide dialog boxes with several pages of options, each page on its own tab. We can move to a page by clicking its tab.
List Boxes
List boxes enable us to make a choice from a list of options. To make our selection, simply we
have to click the option we want to choose. In some list boxes, we can choose more than one
item. To choose more than one item, hold down the Ctrl key while you make your selections.
Radio buttons
A radio button is a type of graphical user interface element that allows the user to choose only
one of a predefined set of options. Windows XP and programs that run under Windows XP use
radio buttons to present a list of mutually exclusive options. We can select only one of the
options presented. Radio buttons are usually round. A dot in the middle indicates that the option
is selected.
Check boxes
Check box is a selection tool designed so that a user can choose one or more items from a list.
We can click the checkbox to select the item. An X or a checkmark appears in a selected box.
3.2 Toolbar
A toolbar is a set of icons or buttons that are part of a software program's interface or an open
window. When it is part of a program's interface, the toolbar typically sits directly under the
menu bar. For example, Adobe Photoshop includes a toolbar that allows you to adjust settings
for each selected tool. If the paintbrush is selected, the toolbar will provide options to change the
brush size, opacity, and flow. Microsoft Word has a toolbar with icons that allow you to open,
save, and print documents, as well as change the font, text size, and style of the text. Like many
programs, the Word toolbar can be customized by adding or deleting options. It can even be moved to a different part of the window.
The toolbar can also reside within an open window. For example, Web browsers, such as Internet
Explorer, include a toolbar in each open window. These toolbars have items such as Back and
Forward buttons, a Home button, and an address field. Some browsers allow you to customize
the items in toolbar by right-clicking within the toolbar and choosing "Customize..." or selecting
"Customize Toolbar" from the browser preferences. Open windows on the desktop may have
toolbars as well.
Here we describe one commonly used Windows toolbar, the Quick Access Toolbar. The Quick Access Toolbar is a customizable toolbar that contains a set of commands that are independent of the tab that is currently displayed. You can move the Quick Access Toolbar between its two possible locations, and you can add buttons that represent commands to it.
To add a new toolbar to the taskbar, right-click the taskbar, point to the Toolbars context menu item, and then select New Toolbar….
Folders are used to organize the data stored on your drives. The files that make up a program are stored together in their own set of folders, just as we will want to organize the files we create into folders.
Windows XP organizes folders and files in a hierarchical system. The drive is the highest level
of the hierarchy. You can put all of your files on the drive without creating any folders, but that
is like putting all of your papers in a file cabinet without organizing them into folders. It works
fine if you have only a few files, but as the number of files increases, there comes a point at
which things become very difficult to find. So you create folders and put related material
together in folders.
At the highest level, you have some folders and perhaps some files. You can open any of the
folders and put additional files and folders into them. This creates a hierarchy.
To create a new folder:
1. In the left pane, click the drive or folder in which you want to create the new folder.
2. Right-click any free area in the right pane. A context menu will appear.
3. Highlight New.
4. Click Folder.
5. Type a name for the new folder and press Enter.
To delete a file or folder:
1. Right-click the file or folder you want to delete. A context menu will appear.
2. Click Delete. Windows Explorer will ask, "Are you sure you want to send this object to the recycle bin?"
3. Click Yes.
To copy a file or folder:
1. Right-click the file or folder you want to copy. A context menu will appear.
2. Click Copy.
To cut (move) a file or folder:
1. Right-click the file or folder you want to cut. A context menu will appear.
2. Click Cut.
To paste a file or folder:
1. After cutting or copying the file, right-click in the right pane of the folder into which you want to paste it. A context menu will appear.
2. Click Paste.
To rename a file or folder:
1. Right-click the file or folder you want to rename. A context menu will appear.
2. Click Rename.
3. Type the new name and press Enter.
To save a file:
1. Click File, which is located on the menu bar. A drop-down menu will appear.
2. Click Save. A Save dialog box will appear, containing the following fields and icons:
Save in field: Click to open the drop-down box and select the drive or folder in which you want to save the file.
Up One Level icon: Click this icon to move up one level in the folder hierarchy.
View Desktop icon: Click this icon to move to the Desktop folder.
Create a New Folder icon: Click this icon to create a new folder.
File Name field: Enter the name you want your file to have in this field.
Save As Type field: Click to open the drop-down box and select a file type.
3.5 Drives
Drives are used to store data. Almost all computers come with at least two drives: a hard drive
(which is used to store large volumes of data) and a CD drive (which stores smaller volumes of
data that can be easily transported from one computer to another). The hard drive is typically
designated the C:\ drive and the CD drive is typically designated the D:\ drive. If you have an
additional floppy drive, it is typically designated the A:\ drive. If your hard drive is partitioned or
if you have additional drives, the letters E:\, F:\, G:\ and so on are assigned.
Accessories
The accessories are a set of tools provided by Windows for configuring your system to meet your vision, hearing and mobility needs. Windows accessories also enable us to maintain the system for optimal performance. To reach the accessibility tools described below, click Start, point to All Programs, point to Accessories, and then point to Accessibility.
The Magnifier
The Magnifier is a display utility that makes the computer screen more readable by people who
have low vision by creating a separate window that displays a magnified portion of the screen.
Magnifier provides a minimum level of functionality for people who have slight visual
impairments.
When you open the Magnifier, a new window appears: the Magnifier Settings window. From that window you can change the level of magnification, the tracking and the presentation options.
The Narrator
The Narrator is a text-to-speech utility for people who are blind or have low vision. Narrator
reads what is displayed on the screen—the contents of the active window, menu options, or text that has been typed.
The Narrator is designed to work with Notepad, WordPad, Control Panel programs, Internet
Explorer, the Windows desktop, and some parts of Windows Setup. Narrator may not read words
aloud correctly in other programs. Narrator has a number of options that allow you to customize the way screen elements are read aloud. The Narrator tool is available in Windows 2000 and newer versions of Windows.
The On-Screen Keyboard
The On-Screen Keyboard is a utility that displays a virtual keyboard on the screen and allows people with mobility impairments to type data by using a pointing device or joystick. Besides providing a minimum level of functionality for some people with mobility impairments, it is also helpful for people who do not know how to type.
Control Panel
The Control Panel is a part of the Microsoft Windows graphical user interface which allows
users to view and manipulate basic system settings and controls via applets, such as adding
hardware, adding and removing software, controlling user accounts, and changing accessibility
The Control Panel has been an inherent part of the Microsoft Windows operating system since its
first release (Windows 1.0), with many of the current applets being added in later versions.
Beginning with Windows 95, the Control Panel is implemented as a special folder, i.e. the folder
does not physically exist, but only contains shortcuts to various applets such as Add or Remove
Programs and Internet Options. Physically, these applets are stored as .cpl files. For example, the
Add or Remove Programs applet is stored under the name appwiz.cpl in the SYSTEM32 folder.
In Vista, the word 'Start' is only visible when you hover over the 'Start' icon.
The standard way to open Control Panel is through Start - Control Panel. There are two methods of displaying the contents. One is called the "Category View": it groups tasks into generalized categories, and clicking a category displays the related control panel applets. The figure below shows the choices when "Performance and Maintenance" is clicked.
A second way of displaying Control Panel is called the "Classic View" and displays icons for individual applets. A partial view is shown below. Some of these applets may have several tabs with different settings. The common applets are described below, along with what they are for and how you can use them to improve your Windows experience.
Accessibility Options – Here you can change settings for your keyboard, mouse, display and
sound.
Add Hardware – This will open the Add Hardware Wizard which will search your computer for
new hardware that you have installed when Windows does not recognize it on its own.
Add or Remove Programs – If you need to install or uninstall any software on your computer,
this is where you will do it. You should always uninstall software rather than delete it from your
hard drive.
Administrative Tools – This section of your Control Panel is used for administrative functions
such as managing your computer, monitoring performance, editing your security policy and viewing event logs.
Automatic Updates – Here you tell Windows how and when to update itself. You can control whether or not it downloads updates automatically or at all, when you want them installed, or whether you simply want to be notified when updates are available.
Bluetooth Devices – If you are using any Bluetooth devices on your computer, here you can add, remove and configure them.
Date and Time – This one explains itself. You can set your computer’s date, time and regional
settings here.
Display – The display settings allow you to change the way things appear on the screen. You can adjust items like the screen resolution and color depth. Here you can also select your background wallpaper, screen saver and appearance settings.
Folder Options – This is where you can adjust the way you view your files and folders from within Windows Explorer.
Fonts – The Fonts applet allows you to add, remove and manage the fonts installed on your computer.
Game Controllers – If you use game controllers such as joysticks or gamepads, you can use this section to add, remove and troubleshoot the devices.
Internet Options – If you use Internet Explorer for your web browser, you will go here to change settings for history, connections and security, among other things.
Keyboard – Here you can adjust settings such as how fast the keyboard will repeat a character when a key is held down, and the cursor blink rate.
Mail – The Mail applet lets you adjust your properties for your Outlook or Exchange email
settings.
Mouse – Here you can adjust your mouse setting for features such as double click speed, button
assignment and scrolling. You can also change your mouse pointers and effects, as well as view details of the installed pointing device.
Network Connections – This item is where you can check and adjust your network connection settings. It will take you to the same place as if you were to right click My Network Places and choose Properties. It will show all of your active network, dialup and wireless connections. There is also a wizard here to help you create new connections.
Phone and Modem Options – If you have a modem installed on your system and uses it for
dialup connections or faxing you can change the settings here. The Dialing Rules tab allows you
to change settings for things such as dialing a number to get an outside line and setting up carrier
codes for long distance and using calling cards. The Modems tab allows you to add, remove and
change the properties of installed modems. The Advanced tab is for setting up telephony
providers.
Power Options – Here you adjust the power settings of your computer. Windows has built in
power schemes for different settings such as when to turn off the monitor or hard drives and
when to go into standby mode. You can even create your own schemes and save them. The
advanced tab allows you to assign a password to bring the computer out of standby and tell the
computer what to do when the power or sleep buttons are pressed. If you want to enable
hibernation or configure an attached UPS then you can do it here as well. This area can also be
accessed from the display properties settings under the Screensaver tab.
Printers and Faxes – This area is where your printers are installed and where you would go to
manage their settings. It’s the same area that is off of the Start menu. There is an add printer
wizard which makes it easy to install new printers. To manage a printer you would simply right-click it and choose the appropriate option from the menu.
Regional and Language Options – If you need to use multiple languages, or different formats for dates, times, numbers and currency, you can configure them here.
Scanners and Cameras – Windows provides a central place to manage your attached scanners
and cameras and adjust their settings. There is even a wizard to add new devices to make the process easier.
Scheduled Tasks – This item provides the ability for you to schedule certain programs to run at
certain times of the day. For example if you have a batch file you want to run every night you
can set it up here. You can also have it run a program at any scheduled interval you choose.
Security Center – The Windows Security Center checks the status of your computer, reporting the state
of your firewall, virus protection and automatic updates. A firewall helps protect your computer
by preventing unauthorized users from gaining access to it through a network or the Internet.
Antivirus software can help protect your computer against viruses and other security threats.
With Automatic Updates, Windows can routinely check for the latest important updates for your computer and install them automatically.
Sounds and Audio Devices – Here you can adjust your sound and speaker settings. The Volume tab
has settings to mute your system, have a volume icon placed in the taskbar and tell your
computer what type of speakers you are using such as a 5.1 system. The sounds tab lets you
adjust what sounds occur for what windows events. If you need to change what device is used for
playback and recording you can do it under the Audio tab. Voice playback and recording settings
are under the Voice tab. To troubleshoot your sound device you can use the Hardware tab. This
is where you can get information about your particular sound device.
Speech Properties – Windows has a feature for text to speech translation where the computer
will read text from documents using a computer voice that you can hear through your speakers.
The type of voice and speed of the speech can be adjusted here.
System – If you have ever right clicked My Computer and selected Properties then you have
used the System feature of Control Panel. This area gives you information about your computer’s
configuration, name and network status. You can click on the Hardware tab to view details about
hardware profiles and driver signing as well as get to Device Manager. The Advanced tab lets
you change settings for virtual memory (page files) and other performance settings. There is also
an area to change startup and recovery settings if needed. If you want to enable remote access to
your computer for Remote Desktop or Remote Assistance you can enable it here.
Taskbar and Start Menu – This is where you change the setting for your taskbar and Start
menu.
User Accounts – If you need to manage your local computer users then you need to go to user
accounts. You can add or remove users and change the account types for users who log into your
system.
Windows Firewall – This is the same firewall setting described in the Windows Security Center
section.
Wireless Network Setup Wizard - This wizard is used to help you set up a security-enabled wireless network in which all of your computers and devices connect through a wireless access
point.
4. SUMMARY
In this lesson, we have learnt about the toolbars of the Windows operating system and the various dialog box controls such as tabs, list boxes, radio buttons and check boxes. We have explained some of the operations on files and folders, like how to save a file to different drives and how to organize folders within the computer. In this lesson, different accessories of the Windows operating system, like the Magnifier, the Narrator and the On-Screen Keyboard, have been explained. Lastly, the Control Panel and its various applets have been discussed.
SELF ASSESSMENT QUESTIONS
1. What is a dialog box? Explain the functions of a dialog box using suitable examples.
LINUX
LESSON NO. 12
STRUCTURE
1. Introduction
2. Objectives
3. Presentation of Contents
4. Summary
1. INTRODUCTION
Linux is very similar to other operating systems, such as Windows and UNIX. But
something sets Linux apart from these operating systems. Since its inception in 1991,
Linux has grown to become a force in computing, powering everything from the New York Stock Exchange to mobile phones, supercomputers and consumer devices.
On August 25, 1991, a Finnish computer science student named Linus Torvalds made the now-famous announcement that he was working on a free operating system ("just a hobby, won't be big and professional like gnu") for 386(486) AT clones. Linux has gained strong popularity amongst UNIX developers, who like it for its
portability to many platforms, its similarity to UNIX, and its free software license.
Today, Linux is a multi-billion dollar industry, with companies and governments around
the world taking advantage of the operating system's security and flexibility. Thousands
of companies use Linux for day-to-day use, attracted by the lower licensing and support
costs. Governments around the world are deploying Linux to save money and time, with many agencies now running it on their servers and desktops.
Linux is open source as its source code is freely available. It is free to use. Linux was
designed considering UNIX compatibility. Its functionality list is quite similar to that of
UNIX.
Early in its development, Linux's source code was made available for free on the Internet.
As a result, its history has been one of collaboration by many users from all around the
world, corresponding almost exclusively over the Internet. From an initial kernel that
partially implemented a small subset of the UNIX system services, Linux has grown to
include evermore UNIX functionally. In its early days, Linux development revolved
largely around the central operating system kernel - the core, privileged executive that
manages all system resources and that interacts directly with the hardware.
Much more than this kernel is needed to produce a full operating system, of course. It is
useful to make the distinction between the Linux kernel and a Linux system. The kernel
in Linux is an entirely original piece of software developed from scratch by the Linux community. Much of the supporting software that makes up a Linux system, however, has other origins: some written from scratch, others borrowed from other development projects or created in collaboration with other teams.
The basic Linux system is a standard environment for applications and for user
programming, but it does not enforce any standard means of managing the available
functionality as a whole. As Linux has matured, there has been a need for another layer of
functionality on top of the Linux system. A Linux distribution includes all the standard
components of the Linux system, plus a set of administrative tools to simplify the initial
installation and subsequent upgrading of Linux, and to manage installation and removal of other packages on the system. A modern distribution also typically includes tools for management of file systems, creation and management of user accounts, administration of networks, and so on.
2. Objectives
Linux is developed collaboratively: individuals and companies participating in the Linux economy share research and development costs with their
partners and competitors. This spreading of development burden amongst individuals and
companies has resulted in a large and efficient ecosystem and unheralded software
innovation.
In the present chapter, the Linux operating system has been introduced along with its history, features, architecture and basic commands.
3. Presentation of Contents
A Linux system has the following major components:
Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system. It consists of various modules and interacts directly with the underlying hardware. The kernel provides the required abstraction to hide low-level hardware details from system or application programs.
System Library - System libraries are special functions or programs through which application programs or system utilities access the kernel's features. These libraries implement most of the functionalities of the operating system and do not require the kernel's code access rights.
System Utility - System utility programs are responsible for doing specialized, individual-level tasks.
Figure 12.1
The Linux system is composed of three main bodies of code, in line with most traditional
UNIX implementations:
The kernel is responsible for maintaining all the important abstractions of the operating system, including such things as virtual memory and processes.
The system libraries define a standard set of functions through which applications can
interact with the kernel, and which implement much of the operating-system
functionality that does not need the full privileges of kernel code.
The system utilities are programs that perform individual, specialized management
tasks. Some system utilities may be invoked just once to initialize and configure some
aspect of the system; others (known as daemons in UNIX terminology) may run permanently, handling such tasks as responding to incoming network connections, accepting logon requests from terminals and updating log files.
Figure 12.1 illustrates the various components that make up a full Linux system.
The most important distinction here is between the kernel and every- thing else. All the
kernel code executes in the processor's privileged mode with full access to all the
physical resources of the computer. Linux refers to this privileged mode as kernel mode,
equivalent to the monitor mode. Under Linux, no user-mode code is built into the kernel.
Some of the important features of the Linux operating system are the following:
Portable - Portability means that software can work on different types of hardware in the same way. The Linux kernel and application programs support their installation on almost any kind of hardware platform.
Open Source - Linux source code is freely available and it is a community-based development project.
Multi-User - Linux is a multiuser system, which means multiple users can access system resources such as memory and application programs at the same time.
Multiprogramming - Linux is a multiprogramming system, which means multiple applications can run at the same time.
Shell - Linux provides a special interpreter program, the shell, which can be used to execute commands of the operating system.
Security - Linux supports a very strong security system. It enforces security at three levels. Firstly,
each user is assigned a login name and a password. So, only the valid users can
have access to the files and directories. Secondly, each file is bound around
permissions (read, write, execute). The file permissions decide who can read or
modify or execute a particular file. The permissions once decided for a file can
also be changed from time to time. Lastly, file encryption comes into picture. It
encodes your file in a format that cannot be very easily read. So, if anybody
happens to open your file, even then he will not be able to read the text of the file.
However, you can decode the file for reading its contents. The act of decoding a coded file is known as decryption.
Multitasking – Linux has the facility to carry out more than one job at the same
time. This feature of LINUX is called multitasking. You can keep typing in a
program in its editor while at the same time execute some other command given
earlier like copying a file, displaying the directory structure, etc. The latter job is
performed in the background and the earlier job in the foreground. Multitasking is achieved by dividing the CPU time intelligently between all the jobs that are being carried out. Each job is carried out according to its priority number. Each job gets the CPU for only a small time slice of a few milliseconds or microseconds for its execution, giving the impression that the tasks are being carried out simultaneously.
Built-in Networking – Linux has built-in networking support with a large number of protocols, which allows easy communication with other users. The users have the liberty of exchanging mail, data, programs, etc. You can send your data to any place irrespective of the distance involved.
Linux runs on a wide range of hardware platforms (with work currently in progress on other platforms), and Linux is used in several production environments around the world. Some of its other salient technical features are listed below:
It has memory protection between processes, so that one program can't bring the whole system down.
Demand loading of executables: Linux only reads from disk those parts of a program that are actually used.
It uses virtual memory using paging (not swapping whole processes) to disk to a
separate partition or a file in the filesystem, or both, with the possibility of adding
more swapping areas during runtime (yes, they're still called swapping areas). A
total of 16 of these 128 MB (2GB in recent kernels) swapping areas can be used at
the same time, for a theoretical total of 2 GB of useable swap space. It is simple to add more swapping areas later if they are needed.
There is a unified memory pool for user programs and disk cache, so that all free
memory can be used for caching, and the cache can be reduced when running
large programs.
It has dynamically linked shared libraries (DLL’s) and static libraries too, of
course.
It does core dumps for post-mortem analysis, allowing the use of a debugger on a
program not only while it is running but also after it has crashed.
Linux is mostly compatible with POSIX, System V, and BSD at the source level.
All source code is available, including the whole kernel and all drivers, the
development tools and all user programs; also, all of it is freely distributable.
Plenty of commercial programs are being provided for Linux without source, but
everything that has been free, including the entire base operating system, is still
free.
It supports many national or customized keyboards, and it is fairly easy to add new ones dynamically.
It provides multiple virtual consoles: several independent login sessions through the console; you switch between them by pressing a hot-key combination. These are dynamically allocated.
It supports several common filesystems, including minix, Xenix, and all the
common system V filesystems, and has an advanced filesystem of its own, which offers filesystems of up to 4 TB and file names of up to 255 characters.
Linux has transparent access to MS-DOS partitions (or OS/2 FAT partitions) via a special file system. You don't need any special commands to use the MS-DOS partition; it looks just like a normal UNIX filesystem (except for funny restrictions on file names and ownership). Compressed MS-DOS 6 partitions do not work at this time without a patch (dmsdosfs). VFAT (Windows NT, Windows 95) support is also available as an extension of the DOS filesystem.
It uses many networking protocols. The base protocols available in the latest
development kernels include TCP, IPv4, IPv6, AX.25, X.25, IPX, DDP
(Appletalk), Netrom, and others. Stable network protocols included in the stable kernels currently include TCP, IPv4, IPX, DDP and AX.25.
The interaction between the user and the hardware happens through the operating
system. The operating system interacts directly with the hardware. It provides common
services to programs and hides the hardware intricacies from them. The high level
architecture of the Linux system is shown in Figure 12.2.
The hardware of a Linux system is present at the centre of the diagram. Surrounding it is the kernel, which provides basic services such as memory management, process scheduling and control of the processor execution level. The main layers are:
Hardware layer - Hardware consists of all peripheral devices (RAM/ HDD/ CPU etc).
Kernel - The core component of the operating system; it interacts directly with the hardware and provides low-level services to the upper layers.
Shell - An interface between the user and the kernel; it takes commands from the user and gets them executed by the kernel's functions.
Utilities - Utility programs that give the user most of the functionalities of an operating system.
Steps to Login
Logging in is a procedure that tells the Linux System who you are; the system
responds by asking you the password. So, in order to login, first, connect your PC to the
Linux system. After a successful connection is established, you would find the following prompt on the screen:
Login:
Each user on the Linux system is assigned an account name, which identifies him
as a unique user. The account name has eight characters or less and is usually based on
the first name or the last name. It can have any combination of letters and numbers.
Thus, if you want to access the Linux resources, you should have your account
name first. If you don't as yet have an account name, ask the system administrator to
assign you one. Now, at the login prompt, enter your account name. Press Enter Key.
Type your account name in lowercase letters; Linux treats uppercase and lowercase letters differently.
Login: sanjay
Password: ******
Once the login name is entered, Linux prompts you to enter a password. While
you are entering your password, it will not be shown on the screen. This is just a security
measure adopted by Linux. The idea behind is that people standing around you are not
able to look through the secret password by looking at the screen. Be careful while you
are typing your password because you will not be able to see what you have typed.
However, if you give either the login name or the password wrong, then Linux denies
you the permission to access its resources. The system then shows an error message on
Login: sanjay
Password: ******
Login incorrect:
Login:
Many Linux systems give you three or four chances to enter your login and
password correct. So, key in your correct login name and the password again. Once you
have successfully logged on by giving a correct login and password, you are given some
information about the system, some news for users and a message about whether you have any electronic mail waiting. The $ (dollar) prompt then appears:
Login: sanjay
Password:
$
The dollar sign is the Linux's method of telling that it's ready to accept commands
from the user. You can have a different prompt also in a case where your system is
configured for showing a different prompt. By default a $ is shown for the Korn or
Bourne Shells.
At this point, you are ready to enter your first Linux command. Now, when you
are done working on your Linux system and decide to leave your terminal - then it is
always a good idea to log off the system. It is very dangerous to leave your system
without logging out because some mischievous minds could tamper with your files and
directories. They can delete your files. They can also read through your private files.
Thus, logging off the system is always a better idea than just turning off your terminal. To log off, give the exit command at the $ prompt:
$ exit
login:
The above command will work if you are using a Bourne or a Korn shell. However, if you are working in the C shell, you can give the logout command instead to log off.
$ logout
login:
There are a few Linux commands that you can type standalone, for example ls, date, pwd, logout and so on. But Linux commands generally require some options or arguments.
The Linux commands follow the following format: Command [options] [arguments]
The options/arguments are specified within square brackets if they are optional. The options are normally specified by a "-" (hyphen) followed by a letter, one letter per option.
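As a rough illustration of this format (the directory name used here is purely illustrative), the same ls command can be combined with options and arguments in several ways:
$ ls                # command alone, no options or arguments
$ ls -l /tmp        # one option (-l, long listing) and one argument (/tmp)
$ ls -a -l /tmp     # two options given separately
$ ls -al /tmp       # the same two options grouped after a single hyphen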
There are many commands you will use regularly. Let us discuss some of these
commands:
date - To display and set the current system date and time.
cal Command
The cal command creates a calendar of the specified month for the specified year; if you do not specify the month, it creates a calendar for the entire year. By default this command shows the calendar for the current month based on the system date. The cal command has the syntax: cal [[mm] yy], where mm is the month, an integer between 1 and 12, and yy is the year, an integer between 1 and 9999. For current years a 4-digit number must be used; '98' will not refer to 1998 but to the year 98.
Options: None
Examples
(i) $ cal
The above command displays the calendar for the current month.
(ii) $ cal 1998
The latter command displays the calendar for the entire year 1998; the entire year may not fit on one screen. The output of cal can also be redirected to a printer to print the calendar for the entire year.
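The following illustrative invocations of cal (the month and year values are chosen only as examples) show the common forms of the command:
$ cal           # calendar of the current month
$ cal 1998      # calendar of the whole year 1998
$ cal 3 1998    # calendar of March (month 3) of the year 1998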
date Command
It shows or sets the system date and time. If no argument is specified, it displays the current date and time. The display format can also be controlled with format specifiers, for example:
%d displays only dd (the day of the month)
%m displays only mm (the month)
Examples
(i) $ date
The above command displays the current system date and time.
(ii) If you are working in the superuser mode, you can also set the date by supplying the new date and time as an argument, where:
dd = day (01-31)
hh = hour (00-23)
mm = minutes (00-59)
yy = year
It sets the system date and time to the value specified by the argument.
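A small sketch of date follows; the + sign introduces a format string built from specifiers such as %d and %m, and the exact specifiers supported can vary slightly between systems:
$ date                # display the full current date and time
$ date +%d            # display only the day of the month
$ date "+%d/%m/%Y"    # display the date as dd/mm/yyyy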
passwd Command
The passwd command allows the user to set or change the login password. The following options are available:
-x days: This sets the maximum number of days that the password will remain active. After the specified number of days you will be required to give a new password.
-n days: This sets the minimum number of days the password has to be active before it can be changed.
-s: This gives you the status of the user's password.
Examples
$ passwd -x 40 bobby
The above command will set the password of the user 'bobby' to remain active for a maximum of 40 days. Also note that the passwd program will prompt you twice to enter a new password. If you don't type the same thing both times, it will give you another chance to enter it correctly.
$ passwd bobby
Old password:
New password:
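A minimal sketch of passwd usage based on the options described above; the user name bobby is taken from the text, and password-ageing options may differ between Linux versions:
$ passwd              # change your own password; you are prompted for the old and new passwords
$ passwd -x 40 bobby  # as the superuser, make bobby's password valid for at most 40 days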
who Command
The who command lists the users that are currently logged into the system.
Options:
am i - this lists the login-id and terminal of the user invoking the command.
Examples:
(i) $ who -t
The second column of the output shows whether the user has write permission or not.
(ii) $ who -u
This lists the currently logged-in users along with their idle time and process information.
(iii) $ who am i
This command shows the account name, where and when I logged in. It also shows the terminal line being used.
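The sketch below shows typical invocations of who; the user names, terminals and times shown in the commented output are purely hypothetical:
$ who            # list all users currently logged in
# sanjay   tty1    Jan 10 09:15
# bobby    pts/0   Jan 10 09:42
$ who am i       # show only your own login entry
# sanjay   tty1    Jan 10 09:15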
finger Command
In a larger system, you may get a big list of users shown on the screen. The finger command with an argument gives you more information about a particular user; it can give complete information even for a user who is not currently logged in.
Examples
$ finger sanjay
This command will give more information about sanjay's identity, for example:
Directory: /home/sanjay
If you want to know about everyone currently logged onto the system, give the following
command:
$finger
File is a unit of storing information in a Linux system. All utilities, applications and data
are represented as files. The file may contain executable programs, texts or databases.
They are stored on secondary memory storage such as a disk or magnetic tape.
Naming Files
You can give filenames up to 14 characters long. The name may contain alphabets, digits
and a few special characters. Files in Linux do not have the concept of primary or
secondary name as in DOS, and therefore file names may contain more than one
period(.).
However, Linux file names are case sensitive. Therefore, names that differ only in the case of their letters represent different files.
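Because names are case sensitive, the following sketch (the file names are hypothetical) creates three distinct files whose names differ only in case:
$ touch report Report REPORT   # three different files, despite the similar names
$ ls
# REPORT  Report  report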
Types
Ordinary files
Directory files
Special files
Ordinary Files
Ordinary files are the one, with which we all are familiar. They may contain executable
programs, text or databases. You can add, modify or delete them or remove the file
entirely.
Directory Files
Directory files as discussed earlier also represent a group of files. They contain list of file
names and other information related to these files. Some of the commands, which
manipulate these directory files, differ from those for ordinary files.
Special Files
Special files are also referred to as device files. These files represent physical devices
such as terminals, disks, printers and tape-drives etc. These files are read from or written
into just like ordinary files, except that operation on these files activates some physical
devices. These files can be of two types Character device files and block device files. In
character device files data is handled character by character, as in case of terminals and
printers. In block device files, data is handled in large chunks or blocks, as in the case of disks and tapes.
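The difference between character and block device files can be seen in a long listing of /dev: the first character of each line is c for a character device and b for a block device. The device names below are only examples and vary from system to system:
$ ls -l /dev/tty1 /dev/sda
# crw--w----  1 root tty   4, 1 Jan 10 09:15 /dev/tty1   (character device: a terminal)
# brw-rw----  1 root disk  8, 0 Jan 10 09:15 /dev/sda    (block device: a disk)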
In Linux, we can refer to a group of files with the help of METACHARACTERS. These
are similar to wild card characters in DOS. The valid meta characters are *, ?, [ and ]. The * matches any number of characters (including none), the ? matches any single character, and [ ] matches any one of the characters enclosed within the brackets.
Examples
(i) $ ls *c
It will list all files starting with any character or characters and ending with the character c.
(ii) $ ls robin*
It will list all the files starting with robin and ending with any character or characters.
(iii) $ ls x?yz*
It will list all those files in which the first character is x, the second character can be anything, the third and fourth characters should be respectively y and z, and the name may end with any characters.
(iv) $ ls I[abc]mn
It will list all those files in which the first character is I, the second character can be either a, b or c, and the last two characters, i.e. the 3rd and 4th, should be m and n respectively. Alternatively, the above command can also be given in the following manner:
$ ls I[a-c]mn
You should be very careful in using these meta characters while deleting files, otherwise you may end up deleting more files than you intended.
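One cautious way of deleting with metacharacters, sketched below with an illustrative pattern, is to list the matching files first and then use the interactive option of rm so that each deletion must be confirmed:
$ ls *.c        # first see exactly which files the pattern matches
$ rm -i *.c     # then delete them one by one, answering y or n for each file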
The data is centralized on a system working with Linux. However, if you do not
take care of your data, then it can be accessed by all other users who login. And there
cannot be anything private to a person or a group of persons. The first step towards data
security is the usage of passwords. The next step should be to guard the data among these
users. If the number of users is small, it is not much of a problem, but it can become one when the number of users is large.
Linux can thus differentiate files belonging to an individual, the owner of a file or
group of users or the others, with different limited accesses, as the case may be. The access permissions for a file are of three types:
Read (r) - You can just look through the file.
Write (w) - You can modify or delete the file.
Execute (x) - You can run the file as a program.
Therefore, if you have a file called vendor.c and you are the owner of it, you may provide yourself with all the rights rwx (read, write and execute). You can provide
rx (read, and execute) rights to the members of your group and only the x (execute) right
to all others.
Normally, when you create a file, you are the owner of the file and your group
becomes the group id for the file. The system assigns a default set of permissions for file,
as set by the system administrator. The user can also change these permissions at his will.
But only a superuser can change the permissions (rwx), ownership and group IDs of files belonging to any user on the system. Note that some combinations are not meaningful; for example, giving the execute (x) permission to an ordinary data file does not carry any sense.
The execute (x) permission on directories mean that you can search through the directory
and the write (w) permission means that you can create or remove files in the directory.
The read (r) permission means that you can list the files in the directory.
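These permissions appear in the first column of a long listing. The sketch below uses the vendor.c file mentioned above; the owner, group, size and date shown are hypothetical:
$ ls -l vendor.c
# -rwxr-x--x  1 sanjay staff 1024 Jan 10 09:15 vendor.c
# rwx : the owner may read, write and execute the file
# r-x : members of the group may read and execute it
# --x : all other users may only execute it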
Now, let us discuss some of the commonly used file and directory commands, such as:
ls – lists information about files and directories
cp – copies files
mv – moves or renames files
ln – creates links to files
rm – removes files
cat – displays the contents of files
chmod – changes the access modes (permissions) of a file
chown – changes the owner of a file
chgrp – changes the group of a file
The ls Command
The ls command is used for listing information about files and directories.
Syntax: ls [options] [filename(s)]
Here, filename can be the name of a directory or a file. If it is a directory, it lists
information about all the files in the directory and if it is a file, it lists information about
the file specified. You can also use meta characters to choose specific files.
Options:
-l - Lists in the long or detailed format, which includes the permissions, owner, size and modification time of each file.
-s - This lists the disk blocks (of 512 bytes each), occupied by a file.
You can make use of more than one option at a time; just group the letters together after a single hyphen.
Examples
(i) $ ls
Output: mkt
The above command lists the names of the files and directories in the current working directory; here it shows a single directory called mkt.
(ii) $ ls -l mkt
The above command gives a long listing of the mkt directory, which is inside the current directory.
(iii) $ ls -l /dev
The above command gives a typical long listing of the special device files in the dev directory under the root.
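A few further illustrative combinations of ls options (the directory name is hypothetical):
$ ls              # names only
$ ls -l           # long listing with permissions, owner, size and date
$ ls -ls /usr/mkt # long listing plus the number of 512-byte disk blocks used by each file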
The cp Command
Syntax: cp file1 file2
Options: None
The cp command of Linux copies one file to another, or one or more files to a directory. Here, file1 is copied as file2. If file2 already exists, the new file overwrites it. The file names specified may be full path names or just the name, in which case the current working directory is assumed.
Examples
(i) The file 'mkt.c', which is present in the current directory, can be copied under a new name given as the second argument.
(ii) Using a metacharacter pattern such as '*.c' as the source and a directory as the destination, cp will copy all the files ending with the letters '.c' into the specified directory.
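A minimal sketch of cp, reusing the mkt.c file and the /usr/mkt directory named in the text; the destination file name backup.c is hypothetical:
$ cp mkt.c backup.c   # copy mkt.c to a new file called backup.c in the current directory
$ cp *.c /usr/mkt     # copy every file ending in .c into the directory /usr/mkt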
The mv Command
Syntax: mv file1 file2
Options: None
The mv command moves a file from one directory to another directory. Here, file1 refers to the source filename and file2 refers to the destination filename. Moving a file to another name within the same directory is equivalent to renaming the file. Otherwise also, mv doesn't really move the file; it just renames it and changes the directory entries.
Examples
(i) $ mv *.c mkt
This command will move all the files ending with the letters '.c' to the directory called 'mkt'.
(ii) Given an existing file 'mkt.c' and a new name as the second argument, mv will rename the file 'mkt.c' present in the current working directory to the new name.
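An illustrative sketch of mv based on the names used in the text; sales.c is a hypothetical new name:
$ mv *.c mkt       # move all .c files into the directory mkt
$ mv mkt.c sales.c # rename mkt.c as sales.c within the current directory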
The ln Command
The 'ln' command adds one or more links to a file. Syntax: ln file1 file2
The ln command here establishes a link to an existing file. File name 'file1'
specifies the file that has to be linked and file name 'file2' specifies the directory into
which the link has to be established. If the 'file2' is in the same directory as file1 then the
file seems to carry two names, but physically there is only one copy. If you use the ls -li
command, you will find that the link count has been incremented by one and that both the
files have the same inode number, as they refer to the same data blocks in the disk. Any
changes that are made to one file will be reflected in the other. And if 'file2' specifies a
different directory, the file will be physically present at one place but will appear as if it
is present in the other directory, thereby allowing different users to access the file. It saves a lot of disk space, because only one physical copy is kept. But you should note that you should have write permission to the directory under which the link is being created.
Examples
(i) Given the file mkt.c in the 'mkt' directory and a destination path in the 'mkt1' directory, ln will create a link for the file mkt.c from the 'mkt' directory into the 'mkt1' directory under the name 'new-mkt.c'.
(ii) The command can also link the file 'myfile.prg' as 'new-file.prg' in the same directory.
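A short sketch of ln using hypothetical directory and file names; ls -li is then used to confirm that both names share one inode and that the link count has become 2:
$ ln mkt/mkt.c mkt1/new-mkt.c       # give the file in mkt a second name inside mkt1
$ ls -li mkt/mkt.c mkt1/new-mkt.c   # both entries show the same inode number and a link count of 2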
The rm Command
This command is used for removing either a single file or a group of files. When
you remove a file, you are actually removing a link. The space occupied by the file on the
disk is freed only when you remove the last link to the file.
The Options:
-r - deletes the directory and its contents along with all the sub-directories and their
contents.
Examples
(i) $ rm -r /usr/mkt
The above command will remove all the files and sub-directories (and their contents) of the directory mkt.
(ii) $ rm /usr/mkt/*.c
This will remove all the ‘c’ program files (.c) from the mkt directory.
The cat Command
The cat command writes the contents of one or more files onto the screen in the sequence specified.
If you do not specify an input file, cat reads its data from the standard input file, generally
the keyboard.
Examples
$ cat new-mkt.c
This command will display the contents of the 'c' program file 'new-mkt.c' onto the screen.
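Two illustrative uses of cat, assuming the hypothetical file names shown: displaying files one after another, and creating a small file directly from the keyboard:
$ cat new-mkt.c mkt.c   # display the contents of the two files in sequence
$ cat > notes.txt       # whatever you type next is written into notes.txt
# (type the text, then press Ctrl-d on a new line to finish)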
The chmod Command
The chmod command changes the access permissions (modes) of files. Syntax: chmod for-whom operation permission filename(s). The owner of a file can change its permissions; only a superuser can change these permissions for any file on the system. In the syntax, 'for whom' denotes the user type, and can be:
u - user, the owner of the file
g - group
o - others
a - all users
'operation' denotes the operation to be performed, and can be: + add permission
- remove permission
= assign permission
r - read permission
w - write permission
x - execute permission
‘filename(s)’ can be the files on which you want to carry out this command.
(i) First see the file permissions using the ls -l command for mkt.c as shown below:
$ ls -l mkt.c
i.e. the user has rwx, the group has x and all others also have x permission.
(ii) Then use chmod to change the permissions, for example to remove the execute (x) permission for the user and to give the write (w) permission to the group.
(iii) Then again use the ls -l command to verify whether the permissions have been set or not:
$ ls -l mkt.c
Alternatively, you could also use the = operation to do the same kind of work. If chmod is given a=rwx for the file mkt.c, the a=rwx assigns read, write and execute permission to all users.
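A sketch of chmod in the symbolic form described above; mkt.c is the file from the text, and the initial permissions shown are assumed for illustration:
$ ls -l mkt.c
# -rwx--x--x  1 sanjay staff 512 Jan 10 09:15 mkt.c
$ chmod u-x,g+w mkt.c   # remove execute from the user and add write for the group
$ chmod a=rwx mkt.c     # give read, write and execute permission to user, group and others
$ ls -l mkt.c
# -rwxrwxrwx  1 sanjay staff 512 Jan 10 09:15 mkt.c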
The chown Command
The chown command changes the owner of the specified file(s). Syntax: chown new-owner filename. This command requires you to be in the superuser mode. The new owner can be specified by the new owner's user number or by name, but the new owner should have an entry in the /etc/passwd file. The filename is the file whose ownership is to be changed.
Options: None
Examples
$ chown bobby sales.c
The above command now makes bobby the owner of the sales.c file.
The chgrp Command
Only the superuser can use this command. This command changes the group ownership of a file. Syntax: chgrp group filename. Here group denotes the new group-ID and filename denotes the file whose group is to be changed.
Example
$ chgrp sanjay sales.c
This changes the group-ID of the file 'sales.c' to the group called 'sanjay'.
Following is a quick guide which lists groups of commonly used commands with a brief description. For full details of any command, consult its manual page with the man command:
$ man command

Manipulating data:
The contents of files can be compared and altered with the commands in this group, for example:
awk - Pattern scanning and processing language
tr - Translate characters
emacs - GNU project Emacs editor

Compressed Files:
Files may be compressed to save space. Compressed files can be created and examined with the commands in this group.

Getting Information:
Various Linux manuals and documentation are available on-line and can be read from the shell, for example with the man command shown above.

Network Communication:
The commands in this group are used to send and receive files from a local Linux host to other hosts. Some of these commands may be restricted at your computer for security reasons.
The Linux systems also support on-screen messages to other users and world-wide electronic mail, for example:
parcel - Send files to another user

Programming Utilities:
The following programming tools and languages are available, based on what you have installed, for example:
cb - C program beautifier
py - Python language interpreter

Misc Commands:
groups - Show group memberships
umask - Show the permissions that are given to new files by default
4. Summary
Linux is a very user-friendly and powerful open source operating system, which always has scope for further improvement. It has features similar to those of the UNIX operating system, but at the same time helps users with a GUI-based environment, which makes it easier to use.
SELF ASSESSMENT QUESTIONS
1. Discuss the architecture of the Linux operating system.